On 6 February 2020, the COMPACT project consortium was represented at a Council of Europe exchange with internet companies. European DIGITAL SME Alliance, which is a partner of COMPACT, had signed a memorandum of understanding with the Council of Europe in 2017 to exchange on issues related to human rights and the internet.
The exchange focused on four topics: Regulation of Artificial Intelligence, Facial recognition, Hate speech and Digital literacy. In the context of hate speech, Annika Linck from DIGITAL SME presented some insights from the COMPACT Brussels symposium as well as the views of DIGITAL SME.
Regulation of AI: is a binding instrument establishing a global benchmark based on human rights, the rule of law and democracy the way forward?
As a business organisation, DIGITAL SME believes that Europe needs to develop AI capacity, as the efficiency of the solutions provided will determine consumer choice. At the same time, the solutions need to respect European laws and regulations. DIGITAL SME will be launching an SME Focus Group on AI that will monitor the impact of regulation on small businesses. Through this group, DIGITAL SME experts will be able to provide policy recommendations from the business perspective on how regulation should be designed so that it does not negatively affect SMEs and innovation.
Facial recognition: is there a need for specific guidance in this field?
As a business organisation, DIGITAL SME does not have a specific position on regulation of facial recognition. From a personal point of view, Annika Linck argues that we need to talk also about the right to anonymity.
Hate speech: what is the best governance model?
As part of a Horizon 2020 project, the COMPACT consortium is analysing the impact of technology convergence in the area of social media. COMPACT is mapping the regulatory and non-regulatory as well as policy initiatives of European governments to tackle issues under the umbrella concept of information disorder: misinformation, disinformation and mal-information.
Platforms have taken on the role of media channels, but in contrast to traditional media providers, they are not responsible for the content. At the same time, they rely heavily on the traffic generated by users.
General context of the research and discussion from the Brussels COMPACT symposium:
- Regulation in the areas affecting social media is still treated separately (media regulation vs. telecoms vs. internet governance, etc.), with only some transversal regulation such as data protection and e-commerce rules cutting across these fields.
- Even though social media get a lot of blame for spreading false information, traditional media are not immune to false information either. However, one can argue that some technological developments and the way social media work seem to amplify the spread of false information.
- Two main aspects to consider here are: the different actors that can contribute and generate information on the internet (internet 3.0), and the role of platforms, which are not the content generators but nonetheless earn their advertising revenues on the basis of shared content.
- Share of advertising in revenue, according to Digg/Virtual Capitalist compiled from 2018 annual reports: advertising accounted for more than 90% of Facebook’s revenues (98.5% in 2018), while advertising from various Google properties, including YouTube, formed 70.4% of the revenues of Alphabet (Google’s parent company).
- An analysis of regulatory and non-regulatory initiatives in different EU member states revealed that the majority of member states still focus on non-regulatory initiatives.
Digital literacy: how to set digital citizenship education as a priority in our member states?
According to the Council of Europe, “Digital Citizenship refers to the ability to engage positively, critically and competently in the digital environment, drawing on the skills of effective communication and creation, to practice forms of social participation that are respectful of human rights and dignity through the responsible use of technology”. It is important to stress that digital literacy is not only about learning how to use proprietary tools developed by certain software providers or how to search the internet with a certain search engine – it should be about raising the general understanding of how the internet functions and about how to engage in this environment in a neutral manner while being able to contribute to the internet as well.
For the purpose of this report, the common term information disorder covers three phenomena that can be differentiated along the dimensions of harm and falseness, as suggested by Wardle and Derakhshan (2017, 5):
- Misinformation is when false information is shared, but no harm is meant (e.g. in case of false connections or misleading content (ib.));
- Disinformation is when false information is knowingly shared to cause harm (e.g. in case of false context, imposter content, manipulated content or fabricated content (ib.));
- Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere (e.g. in case of leaks, harassment or hate speech (ib.), or by deliberately misinterpreting it or neglecting its context to cause harm).
These forms of ill-content, as we may call them, do not pose a risk to only one vulnerable segment of society; they represent a risk of harm for society in general, putting fundamental human rights and values at stake and undermining public trust in the Internet (EBU 2018).
Prepared by: Annika Linck, European DIGITAL SME Alliance