Prominent digital service providers capitalize on personal data, creating a power imbalance in which they dominate their users. One intuitive solution for protecting users' personal information, and structurally resolving this power imbalance, is a right to privacy. However, recent advances in inductive algorithms in information technology (namely, artificial intelligence) show that privacy alone is insufficient to curtail the power of these tech firms. Lastly, an alternative avenue is suggested that may prove fruitful, though it remains unexplored to its full extent: reforming antitrust law.
This paper applies Karl Marx's definition of alienation to art produced by the artificial intelligence system Dall-E. This is achieved by examining Dall-E's productions through the lens of historical texts, namely Walter Benjamin's essay 'The Work of Art in the Age of Technological Reproducibility' and Leo Tolstoy's book 'What is Art?', further supported by contemporary literature on artificial creativity in relation to the remaining role of the artist. The resulting analysis indicates that Dall-E's production process is divided in the sense framed by Marxist definitions, thereby making it difficult to trace artistic mastery. In the following section, the analysis of creativity leads to the idea that alienation in Dall-E is better understood as a shaded artistic freedom. By contrast, in the final section Dall-E is shown to overcome its own alienating aspects by becoming universal and multi-usable, aligning with democratic results.
Although gender equality is UNESCO's global priority, it is not yet a reality, and women are confronted with this daily. The integration of artificial intelligence reinforces this inequality. Gender inequality in A.I. appears to be caused by gender stereotyping, with all its (sometimes fatal) consequences for women. Moreover, UNESCO's Agenda 2030, which concerns the entire world population, depends on both gender equality and the application of artificial intelligence. The topic therefore deserves more attention. This study examines UNESCO's storytelling about gender bias in artificial intelligence and the extent to which UNESCO is effective in placing this issue on the agenda through storytelling. To raise awareness, UNESCO uses storytelling as a strategy, which is analysed here on the basis of the theories of Joos (2021) and Kent (2015). Because the issue is still emerging, the storytelling around two comparable issues is analysed as well, namely the (under)representation of women in the sports sector and in (higher education and) science. The patterns identified there make it possible to assess the likelihood that UNESCO will succeed in agenda-setting for the emerging issue of gender bias in artificial intelligence. On the basis of the findings, the main question is answered: 'In what way does UNESCO use storytelling to place gender bias in artificial intelligence on the agenda, and how effective is it in doing so?' The findings show that the framing of the issue is decisive. The consequences of gender bias in A.I. are framed as a crucial and indispensable capacity for realising Agenda 2030. Because the issue is close to Agenda 2030, the audience resonates more readily with UNESCO's storytelling. The problem of gender bias in A.I. is made concrete, which creates a quicker connection with the audience. It is striking that the issue becomes abstract when UNESCO frames it as a human-rights question, as with the issue of women in the sports sector; as a result, the audience resonates less readily with the storytelling and the issue attracts attention more slowly. UNESCO's storytelling has a dramatic, emotional and political tone in which gender bias in A.I. is literally given a face. The storytelling is a translation of women's empowerment as UNESCO's strategy. This study shows that UNESCO is successful in its call for political attention to gender bias in artificial intelligence through storytelling, since organisations, institutes and institutions are motivated by UNESCO's storytelling to take action. However, the analysis also shows that men's contribution is left out of the storytelling because the emphasis lies on women's empowerment. This detracts from the effectiveness of the storytelling, given that gender bias in A.I. has worldwide consequences. Despite this partial success, UNESCO faces resistance in its agenda-setting, that is, agenda denial.
To date, strategic opposition has manifested itself as denial of the problem and as opposition aimed at the proposed policy. From the perspective of Cobb and Ross (1997), these forms of opposition to agenda-setting fall under the 'low-cost strategy' and the 'medium-cost strategy'. UNESCO manages to overcome these opposing forces thanks to its concrete framing of gender bias in A.I. and by naming concrete events in its storytelling. Future research is expected to identify other forms of agenda denial. Following from the main question, this study tests a hypothesis: 'If UNESCO's storytelling combines the seven captivation factors (Joos, 2021) with the identification component through which the audience resonates with the storytelling (Kent, 2015), this will lead to success in placing the issue of gender bias in artificial intelligence on the political agenda of organisations, institutes and institutions.' On the basis of the findings and the answer to the research question, the hypothesis is rejected. Although the issue of gender bias in A.I. receives political attention thanks to UNESCO's storytelling, the storytelling proves not to be effective. Gender equality in A.I. is an issue that concerns the entire population, and from that perspective men's contribution to promoting gender equality matters; UNESCO's storytelling, however, does not reflect this. From the perspective of Joos (2021) and Kent (2015), the captivation factors and identification theory respectively raise the question of whether UNESCO is addressing the right audience. Critically, this study has a number of limitations: storytelling is not in all cases sufficient to break through agenda denial, and an analysis from the perspective of storytelling alone does not tell the whole story about the agenda-setting of a topic. To investigate further the effectiveness of storytelling about gender bias in A.I. on agenda formation, follow-up research could examine the strategic forms of agenda denial at a later stage, taking into account men's contribution to promoting gender equality and agenda-setting on political agendas elsewhere, such as, specifically, the European Commission. Finally, on the basis of the findings it is recommended to include men's contribution in the storytelling, so that women's empowerment as a strategy recedes into the background. Since this is a worldwide issue that concerns everyone, this may have a positive influence on the intended result. As the question in the title of this study asks, 'Siri, is this really a man's world?', this study shows that it is indeed a man's world, but that it would be nothing without the contribution of women.
A central tenet of the standard account of moral enhancement qua algorithmic technology is that it has the potential to solve the mega-problems of our time, such as global poverty or the climate crisis. Thereby, it is simply assumed that the enhanced moral competence of individual agents will directly translate into solutions to our major moral problems. This paper sheds light on this key assumption and argues for a more sophisticated outlook on the potential effects of algorithmic moral enhancement. In particular, it is shown that our major moral problems are essentially political problems which are characterised by various kinds of dilemmas. The author shows that, due to the peculiar nature of these problems, three distinct challenges arise when it comes to translating moral competence into political solutions. These challenges will have to be met by future proposals of algorithmic moral enhancement.
The increasing reliance on ICT within the public sector has changed the working practices of governmental bureaucracies from a paper reality to a digital one, and governments are eager to use new technologies for their business operations and reap their benefits just as the private sector does. Since technological advancement is driven by the private sector, and people are increasingly accustomed to the speed and efficiency that technology brings, citizens expect governments to adapt and digitize as well. An important trend being experimented with is the use of self-learning algorithms, particularly Artificial Intelligence (AI). Since AI runs on data, it is only logical that an organization such as the government, which holds an abundance of data, would like to put this to use. Collected data may contain patterns; if such patterns can be found, and one assumes that the near future will not differ much from the period in which the data were collected, predictions can be made. However, AI systems are often deemed opaque and inscrutable, which can collide with the judicial accountability that governments owe their citizens in the form of transparency. Based on the assumption that the information used by AI, i.e. data and algorithms, is not similar to the documentary information that governments are accustomed to, there are added obstacles for governments to overcome in order to achieve the desired effects of transparency. The goal of this research is to explore the barriers to transparency in governmental use of AI in decision-making by analyzing governmental motivations towards (non-)transparency and how the complex nature of AI relates to this. The question that stems from this is: what are the obstacles to being transparent in AI-assisted governmental decision-making? The study compares the obstacles to transparency for documentary information with the obstacles that experts encounter in practice related to AI, from which a contribution to the theory follows. Based on the literature, it is hypothesized that governments are limited by privacy and safety issues, lack of expertise, lack of cooperation and inadequate disclosure. The results show that the obstacles are more nuanced and that an addition to the theory is appropriate. The most important findings are: that data and algorithms should not be treated as documentary information; that the policy domain is an important determinant of the degree of transparency; that lack of cooperation causes multiple obstacles to transparency such as self-censoring, accountability issues, superficial debate, false promises, inability to explain and ill-suited systems; that more information disclosure is not always better; and that the public sector should rethink its overreliance on private-sector business models. All these obstacles can be associated with losing sight of the fundamental function of government: serving citizens.
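As a minimal illustration of the kind of pattern-based prediction this abstract describes, and not something drawn from the thesis itself, the sketch below fits a linear trend to invented historical counts and extrapolates one step ahead, under the stated assumption that the near future resembles the period in which the data were collected.

```python
import numpy as np

# Hypothetical historical observations (e.g. monthly application volumes).
# The numbers are illustrative only; they do not come from the thesis.
history = np.array([120, 132, 129, 141, 150, 158, 163], dtype=float)
months = np.arange(len(history))

# Find a simple pattern: a linear trend fitted to the past data.
slope, intercept = np.polyfit(months, history, deg=1)

# Predict the next period, assuming the near future resembles the past.
next_month = len(history)
prediction = slope * next_month + intercept
print(f"Predicted volume for month {next_month}: {prediction:.1f}")
```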
This thesis investigates two interrelated issues: the tendency of automated decision-making (ADM) systems to exacerbate gender bias, and the extent to which current European data protection legislation (the GDPR) both promises and delivers a right to explanation of decisions reached by those systems. The thesis has high philosophical and societal relevance, and engages fluently with a variety of important discourses: technical discussions of artificial intelligence, feminist scholarship, and commentaries on EU legal texts. After an introduction on machine learning and algorithms, the thesis moves to examining those parts of the GDPR that address ADM, in order to clarify the way they are regulated. In the second and third chapters, problems such as the black box, different types of bias, technological design and neutrality are discussed. Gender biases are presented and many cases are discussed in order to account for this growing phenomenon. A central topic of investigation is data representativeness, or how data about women are so absent from our everyday infrastructure that discrimination routinely occurs. This thesis ultimately seeks to provide a framework for the introduction of a new feminist ethics of technology, one that addresses bias and data collection in an intersectional way and, above all, calls for new regulations to be discussed.
This study examines the impact of recently introduced personal data protection legislation of the European Union (EU) on the development of artificial intelligence (AI) in Europe. It compares the competitive position of the EU with that of the United States (US), which is in many respects the market leader in the field of AI. It finds that the degree of freedom that companies working with AI have to collect and handle personal data is central to the functioning of machine learning and deep learning. The wide definition of personal data, as constituted by the EU in the Breyer case, has resulted in a wide variety of data being labelled personal data within the EU regulatory scope. The legal framework in the EU that deals with nearly all such data is therefore the recently implemented General Data Protection Regulation (GDPR), since this is the appropriate framework for personal data. This study finds that the GDPR severely limits companies' freedom to collect and process personal data in the long term, most notably through the "right to be forgotten" and the "right to explanation". These rights, deriving from the GDPR, have several negative effects on the ability of European companies to use personal data to develop their AI systems, which negatively affects their competitive position vis-à-vis the US, which still has more relaxed data protection regulation. However, the European data protection model is starting to be implemented in other jurisdictions. The US has announced the California Consumer Privacy Act, which echoes some key provisions of the GDPR but is still being reviewed by the state of California. Moreover, due to several factors discussed in the study and the so-called "Brussels effect", this study finds it highly likely that the trend of privacy norms stemming from the GDPR being copied by the US will continue and will therefore 'level the playing field' for European and American firms developing AI.
Over the past few months, an increasing number of images and installations produced by artists using artificial intelligence (AI) have sold at major auction houses like Christie’s or Sotheby’s. This suggests that the artificial, semi-automated production and reproduction of art is becoming part of established and recognized institutional art circuits. What remains less certain, though, is the possibility of AI producing art fully autonomously, that is, without the artist’s input. There are currently several AI image-generating robots created with the aim of being as automatic as possible, among which is PIX18, built by the mechanical engineer Hod Lipson. Allegedly, this machine can apply physical pigment to canvas to make consistently diverse paintings with minimal human intervention. It is so skilled with the brush that its products have become practically indistinguishable from human-made paintings in aesthetic terms. With that in mind, the aim of this thesis will be to take PIX18 as a case study and investigate the extent to which its depictions can be seen as artistically equivalent to human art. This will be done by comparing a painting called ‘Frightened Girl’ (2016), made by/with the robot, to another painting titled ‘Woman with a Flowered Hat’ (1963), made by the artist Roy Lichtenstein. In that fashion, this thesis will attempt to answer the following question: to what extent can PIX18’s ‘Frightened Girl’ (2016) be seen as artistically equivalent to Lichtenstein’s ‘Woman with a Flowered Hat’ (1963), and what does this say about current conceptions of art and future considerations of AI? Both paintings are reinterpretations of other images, and will be put in dialogue across two fundamental perspectives: that of the agent who made the paintings, and that of the viewer who perceived or perceives them. Potentially, the insights reached in this thesis entail a reconsideration of contemporary art as an exclusively human phenomenon, and a reassessment of how far AI has come in the replication of complex human behaviors.
This thesis explores the role and influence of artificial intelligence in our contemporary society through the inter-subjective and embodied experience of new media art. By exploring the intersecting lines between art, technology, cognition and society, it discusses the potential of artistic practices involving intelligent technologies to reframe the way in which we engage with Artificial Intelligence. This research benefits from an interdisciplinary approach that reaches through and beyond the field of contemporary art to challenge the mainstream discussion on the future of A.I. and the ethics of its applications. In order to do so, this thesis begins by establishing an understanding of embodiment in light of postcognitive approaches that move away from the dualism of mind and matter and negate representationalist paradigms of perception. This forms the basis from which to explore the particular subjectivities and material encounters in the practice and experience of new media art that engages with intelligent technologies. This thesis uses specific case studies involving creative applications of A.I. in contemporary art to reveal our interaction and co-existence with intelligent technologies as intricate and uniquely entangled. With the support of scientific, philosophical and aesthetic theories, it ultimately argues that such applications of A.I. in artistic practice provide a unique perspective for a transformative encounter with regard to the self, to our performed reality and to our becoming in a posthumanist environment.
Artificial Grammar Learning (AGL) is a powerful experimental paradigm for testing specific hypotheses about language acquisition but is limited because of its reliance on meaningless grammatical structures. Meanwhile, formal and computational semantics provide rigorous ways to define and calculate meanings for formal languages, but are typically only used to describe or simulate the linguistic competence of adult speakers. This thesis attempts to connect these two fields by proposing a new type of AGL experiment that uses a language with both a context-free syntax and a formally defined semantics which can be used to express spatial relationships between objects. Moreover, using a computer simulation in which an Intelligent Agent (IA) acquires such a language, it shows how this new paradigm can be used to test psycholinguistic hypotheses about the acquisition of both syntax and semantics.
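As an illustrative sketch only, not taken from the thesis (whose grammar, lexicon and simulation are not specified here), the fragment below pairs a tiny context-free rule for spatial descriptions with a compositional interpretation against an invented scene, to show the kind of syntax–semantics coupling such an AGL language could have.

```python
# A toy context-free grammar fragment (S -> NP Rel NP) for spatial
# descriptions, with a compositional semantics evaluated against a small
# scene. All names, relations and coordinates are invented for this sketch.

# Scene: object name -> (x, y) position on a grid.
scene = {"ball": (1, 1), "box": (3, 1), "star": (1, 4)}

# Lexicon: relation word -> truth condition over two positions.
relations = {
    "left_of": lambda a, b: a[0] < b[0],
    "above":   lambda a, b: a[1] > b[1],
}

def interpret(sentence: str) -> bool:
    """Interpret an 'OBJ REL OBJ' string generated by the rule S -> NP Rel NP."""
    subj, rel, obj = sentence.split()
    return relations[rel](scene[subj], scene[obj])

if __name__ == "__main__":
    print(interpret("ball left_of box"))  # True: (1, 1) lies left of (3, 1)
    print(interpret("box above star"))    # False: (3, 1) is not above (1, 4)
```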
This thesis discusses the way in which we construct the identity of artificial intelligence through science fiction film. It examines how sympathetic treatment of artificial intelligence in this genre may induce empathy in its audience, and how this could sway the artificial intelligence debate when it enters the political sphere. The paper first provides discussions of the artificial intelligence debate, the effect films can have on viewer emotion, and the extent to which humans can empathize with artificial intelligence. The paper then uses three science fiction films – Interstellar (2014), A.I. Artificial Intelligence (2001) and Her (2013) – to demonstrate the effects such films can have on viewer emotion and to discuss the possible repercussions sympathetic treatment of AI could have on the human race. The essay warns against this attitude due to the significant dangers the unchecked development of AI could pose to the human race, and suggests precautionary steps to be taken in the field of education.