'AI products cannot be moral agents. But people, corporations and governments can and should be' was published in The Toronto Star (co-authored with Nizan Geslevich Packin).
A research group I participate in - 'Healthy Online Conversations' - presented at the 'Conversations Online' workshop. We shared some of our experience as consultants for a leading company that designs a social engagement platform. The outcome of the workshop will be an edited book on the topic, to which we are contributing a chapter.
'Central Bank Digital Currencies could mean the End of Democracy' was published in The Conversation and the National Post. I argue that central banks worldwide are racing to implement digital legal tender. While they prepare for the day when the economic and technological benefits outweigh the risks, democratic considerations are hardly discussed in public - and this must change. (Originally titled 'Democracy is Beyond the Mandate of Central Banks')
Collaborated with Nizan Geslevich Packin on an op-ed in the Israeli financial newspaper Calcalist about the rights of algorithms and assigning liability to software corporations, published just before the launch of the Israeli National Artificial Intelligence Program, led by the Ministry of Innovation, Science and Technology [Hebrew].
Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several trust scholars and AI ethicists argue against using this concept. It has been labelled a "misnomer", a "conceptual misunderstanding", and "conceptual nonsense". Instead, these critics often suggest shifting our paradigm from 'Trustworthy AI' to 'Reliable AI'. I explain why, and review the existing criticisms of the concept of 'Trustworthy AI'. Ultimately, ignoring these criticisms will likely lead to misplacing trust in non-moral agents. By doing so, AI designers, regulators, investors, and other stakeholders risk attributing responsibilities to agents who cannot be held responsible, and consequently eroding the social structures that underpin accountability and liability. I argue that, realistically, the concept of 'Trustworthy AI' has already been widely adopted by the AI community - industry, civil society, policymakers, and academic researchers - so a paradigm shift is unlikely. If we wish to be practical, we should adopt a view of the field of AI ethics as focusing on power, social justice, and scholarly activism. I suggest that community-driven and social-justice-oriented ethicists of AI and trust scholars draw attention to the critical social aspects highlighted by phenomena of distrust and focus on the democratic aspects of trust formation. This way, it will be possible to further reveal shifts in power relations, challenge unfair status quos, and suggest meaningful ways to safeguard the interests of citizens in the era of the conceptual nonsense 'Trustworthy AI'.
I present what a central bank digital currency (CBDC) is and how this new form of money differs from the digital balances we see in credit card statements and bank accounts. First, I discuss the significant benefits of implementing a CBDC and share some of the open technical decisions that designers of such a system face. I then turn to the motivations behind its development and implementation: innovation from the fintech sector, and risk and competition from decentralized cryptocurrencies, centralized stablecoins, and the currencies of other nations. Finally, I identify six categories of ethical concerns related to CBDC. My main argument is that the data generated by such a system leaves the door open for authorities to influence social norms by surveilling and controlling financial activities. Therefore, even in liberal democracies, giving up financial privacy - the ability to trade without any third party involved - leads not only to the loss of anonymity but also to a constant risk of losing freedom.