|Project title||Addressing Digital Deception in the Era of AI|
|Project type||Applied research and development|
|Teaser||What does the current ethical landscape look like within the STEM field in a Danish context?|
|- Academy||Københavns Erhvervsakademi (KEA)|
|- Contact person||Ann Katrine Miranda|
|Project period||20 September 2018 - 31 December 2019|
In short, the automation of knowledge through the use of code and data has brought with it new ethical concerns that professionals within the world of digital development are not yet trained to identify or mediate. This project wishes to explore how to produce educational content (a toolkit or educational game) and appropriate delivery methods that contribute to raising ethics awareness and critical thinking skills within the digital educational setting.
|- Background and purpose||
Background: Digital Deception – an Issue about Ethics
Sociologist Christina Nippert-Eng has developed the concept of digital camouflage by drawing on traditional camouflage practices in areas such as warfare and the animal world. By juxtaposing these offline worlds with the online, she has created a space for thinking about face-to-face as well as digital deception, defining digital camouflage as "(…) false and misleading digital representations, used in order to deceive a target about the producers' presence, true nature, and/or intent" (Nippert-Eng 2017: 50). Examples of digital camouflage include fake news, meticulously crafted to appear real; fake people, i.e. social bots that are merely algorithms posting and interacting on social media; trolls, people who start quarrels or upset others on the internet to distract and sow discord by posting inflammatory, digressive, extraneous, or off-topic messages in an online community, often by way of fake profiles; and deepfakes, an image synthesis technique used to superimpose existing videos and images onto source images or videos in order to swap one face for another. Nippert-Eng goes on to argue that in order to engage in digital camouflage, "(…) you must know what is normal and expected in a given environment, and be able to execute that flawlessly" (Ibid.). However, in order to detect digital camouflage, "(…) you must not only know that camouflage like this exists, you also have to look for it, and be willing and able to find it" (Ibid.).
An excellent example of digital camouflage is the recent Cambridge Analytica scandal. In 2013, Cambridge University researcher Aleksandr Kogan created a personality quiz which he launched on Facebook. About 300,000 users signed up for it and agreed to give its creator access to their information. But Dr. Kogan also harvested information from users' entire networks of friends without their permission, which enabled him to see details from as many as 87 million users. The app gathered a host of information from users, including their gender, location, political and religious views, their private messages, what websites they had liked, and any publicly available information on their social media profiles. Dr. Kogan, through his company Global Science Research, sold Facebook users' information to the political data firm Cambridge Analytica, which used the information to create a personality prediction tool and to create highly specific advertisements designed to influence individual voters. The information was allegedly used in the 2016 US election as part of Donald Trump's election campaign.
The concept of digital camouflage and the Cambridge Analytica scandal set the scene for this project, which aims to address digital deception in the era of AI by developing an elective course offered to the KEA students at DIGITAL, in line with Nippert-Eng's assertion on engaging in, detecting and ultimately exposing digital deception.
Relevance for SMVs and the Industries
The issue of digital deception, such as fake news, is not only relevant within a political context. Large corporations and small businesses alike are more interested in image control and branding than previously. The spread of fake news and alternative or fabricated facts, leading to the phenomenon of a 'shitstorm', has the potential to greatly affect any business with an online presence. Many companies have responded to this concrete threat with a very strong PR strategy across all channels, in which highly personalised responses are provided to customers who give negative (and positive) reviews or comments on Facebook, Instagram, Twitter etc. This is done in order to control their image as well as to counteract any image attack quickly, but it is a labor-intensive, short-term solution to a growing and lasting problem. Cyber warfare and fake news attacks are therefore highly relevant concerns, not only for politicians and public agencies, but also for large companies and small businesses, which need to safeguard both their security and their image online.
According to Danish Senior IT Consultant Christina Bertelsen, the phenomenon of digital deception needs to be addressed as early as possible, and she calls for the training of students in digital deception and critical thinking, since both modern communication technology and our culture in general contribute to people today making quicker decisions online than previously seen, as well as relying heavily on authoritative sources such as search engines.
“But if we want to avoid digital deception getting too much of a grip, we need to ensure that students are trained to think critically about social media and search engines” (https://deadline.dk/fake-news-kan-koste-danske-virksomheder-milliarder-i-fremtiden/).
Kasper Holst Hansen, the Danish CEO and founder of EduLab, the company behind the most popular digital math portal in Denmark, also argues that digital ethics will be a competitive advantage for a short period only; rather, it will become a precondition for every company, just like paying employees a salary. Many companies and agencies currently view ethics as a hindrance that slows innovation down. However, this project has the ambition of helping to turn that rhetoric on its head by illustrating that ethics can actually open up new challenges that computer scientists, engineers and entrepreneurs can decide to focus on, addressing a broad range of societal issues. This requires a shift in company culture, in which we can imagine a future where an in-house ethicist will be able to answer questions and facilitate step-by-step development, much like the current role of the Data Protection Officer in relation to data privacy.
|- Activities and actions||
Kickoff meeting, January 2019
Meeting with external partner, February 2019
Data collection, spring 2019
Data analysis, autumn 2019
Working paper, December 2019 (mapping of fakeness (bots, trolls, deepfakes) relevant to Danish SMEs, and their strategies for countering it)
Toolkit development, spring 2020
Toolkit delivery and project completion, summer 2020
|- Project method||
Qualitative data collection:
– semi-structured interviews
– participant observation
Qualitative data analysis:
– analysis workshops
– paper writing
– tool development
|- Expected project results||
Article providing an overview of fakeness relevant to Danish SMEs
|- Expected project impact|
|Tags||education | ethics | technology|
|- Staff||Københavns Erhvervsakademi (KEA)|
Ann Katrine Miranda, Anders Dollerup, Julia Polinna, Janus Pedersen
|- Dissemination of results|
|- Value of results|