European Union Archives - WITA
http://www.wita.org/nextgentrade-topics/european-union/

Adapting the European Union AI Act to Deal with Generative Artificial Intelligence
/nextgentrade/european-union-artificial-intelligence/ (Wed, 19 Jul 2023)

The post Adapting the European Union AI Act to Deal with Generative Artificial Intelligence appeared first on WITA.

When the European Commission in April 2021 proposed an AI Act to establish harmonised EU-wide rules for artificial intelligence, the draft law might have seemed appropriate for the state of the art. But it did not anticipate OpenAI’s release of the ChatGPT chatbot, which has demonstrated that AI can generate text at a level similar to what humans can achieve. ChatGPT is perhaps the best-known example of generative AI, which can be used to create texts, images, videos and other content.

Generative AI might hold enormous promise, but its risks have also been flagged up. These include (1) sophisticated disinformation (eg deep fakes or fake news) that could manipulate public opinion, (2) intentional exploitation of minorities and vulnerable groups, (3) historical and other biases in the data used to train generative AI models that replicate stereotypes and could lead to output such as hate speech, (4) encouraging the user to perform harmful or self-harming activities, (5) job losses in certain sectors where AI could replace humans, (6) ‘hallucinations’ or false replies, which generative AI can articulate very convincingly, (7) huge computing demands and high energy use, (8) misuse by organised crime or terrorist groups, and finally, (9) the use of copyrighted content as training data without payment of royalties.

To address those potential harms, it will be necessary to come to terms with the foundation models that underlie generative AI. Foundation models are typically trained on vast quantities of unlabelled data, from which they infer patterns without human supervision. This unsupervised learning enables foundation models to exhibit capabilities beyond those originally envisioned by their developers (often referred to as ‘emergent capabilities’).

The evolving AI Act

The proposed AI Act (European Commission, 2021), which at time of writing is still to be finalised between the EU institutions, is a poor fit for foundation models. It is structured around the idea that each AI application can be allocated to a risk category based on its intended use. This structure largely reflects traditional EU product liability legislation, in which a product has a single, well-defined purpose. Foundation models however can easily be customised to a great many potential uses, each of which has its own risk characteristics.

In the ongoing legislative work to amend the text, the European Parliament has proposed that providers of foundation models perform basic due diligence on their offerings. In particular, this should include:

  • Risk identification. Even though it is not possible to identify in advance all potential use cases of a foundation model, providers are typically aware of certain vectors of risk. OpenAI knew, for instance, that the training dataset for GPT-4 featured certain language biases because over 60 percent of all websites are in English. The European Parliament would make it mandatory to identify and mitigate reasonably foreseeable risks, in this case inaccuracy and discrimination, with the support of independent experts.
  • Testing. Providers should seek to ensure that foundation models achieve appropriate levels of performance, predictability, interpretability, safety and cybersecurity. Since the foundation model functions as a building block for many downstream AI systems, it should meet certain minimum standards.
  • Documentation. Providers of foundation models would be required to provide substantial documentation and intelligible usage instructions. This is essential not only to help downstream AI system providers better understand what exactly they are refining or fine-tuning, but also to enable them to comply with any regulatory requirements.
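The three obligations above can be thought of as a minimal due-diligence record that a provider maintains alongside a model. The sketch below is purely illustrative: the class and field names are invented for this example, and neither the Parliament's text nor the draft AI Act prescribes any concrete schema.

```python
from dataclasses import dataclass, field

@dataclass
class FoundationModelDossier:
    """Hypothetical due-diligence record for a foundation model provider.

    Field names are illustrative only; the AI Act does not define
    a concrete data structure for these obligations.
    """
    model_name: str
    # Risk identification: reasonably foreseeable risks and their mitigations
    identified_risks: list[str] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)
    # Testing: performance, predictability, safety, cybersecurity metrics
    test_results: dict[str, float] = field(default_factory=dict)
    # Documentation: intelligible usage instructions for downstream providers
    usage_instructions: str = ""

    def is_complete(self) -> bool:
        """Basic check: every identified risk must have a documented mitigation."""
        return all(risk in self.mitigations for risk in self.identified_risks)

dossier = FoundationModelDossier(
    model_name="example-fm",
    identified_risks=["language bias", "inaccuracy"],
    mitigations={"language bias": "balanced multilingual sampling",
                 "inaccuracy": "retrieval grounding"},
    test_results={"safety_eval": 0.92},
    usage_instructions="Not for high-risk uses without further assessment.",
)
print(dossier.is_complete())  # True
```

Even such a toy structure makes the Parliament's logic visible: documentation is what lets a downstream provider see which risks were identified and how they were mitigated.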

Room for improvement

These new obligations, if adopted in the final AI Act, would be positive steps, but lack detail and clarity, and would consequently rely heavily on harmonised standards, benchmarking and guidelines from the European Commission. They also risk being excessively burdensome. A number of further modifications could be put in place.

Risk-based approach

Applying all obligations to the full extent to every foundation model provider, both large and small, is unnecessary. It might impede innovation and would consolidate the market dominance of firms that already have a considerable lead in foundation models (FMs), including OpenAI, Anthropic and Google DeepMind. Even without additional regulatory burdens, it might be very hard for any company outside this group to match the resources of, and catch up with, the FM market leaders.

A distinction could therefore be made between systemically important and non-systemically important FMs, with significantly lower burdens for the latter. This would be in line with the approach taken by the EU Digital Services Act (DSA), which notes that “it is important that the due diligence obligations are adapted to the type, size and nature of the … service concerned.” The DSA imposes much more stringent obligations on certain service providers than on others, notably by singling out very large online platforms (VLOPs) and very large online search engines (VLOSEs).

There are two reasons for differentiating between systemic and non-systemic foundation models and only imposing the full weight of mandatory obligations on the former. First, the firms developing systemic foundation models (SFMs) will tend to be larger, and better able to afford the cost of intense regulatory compliance. Second, the damage caused by any deviation by a small firm with a small number of customers will tend to be far less than that potentially caused by an SFM.

There are useful hints in the literature (Bommasani et al, 2023; Zenner, 2023) as to criteria that might be used to identify SFMs, such as the data sources used, or the computing resources required to initially train the model. These will be known in advance, as will the amount of money invested in the FM. These pre-market parameters presumably correlate somewhat with the future systemic importance of a particular FM and will likely also correlate with the ability of the provider to invest in regulatory compliance. The degree to which an FM provider employs techniques that facilitate third-party access to their foundation models and thus independent verification, such as the use of open APIs, or open source, or (especially for firms that do not publish their source code) review of the code by independent, vetted experts, might also be taken into account. Other, post-deployment parameters, including the number of downloads, or use in downstream services or revenues, can only be identified after the product has established itself in the market.
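The pre-market and post-deployment parameters discussed above could, in principle, feed a simple classification heuristic. The sketch below is an invented illustration: the thresholds, weights and cutoff are placeholders with no basis in the AI Act or the cited literature, which define no numeric criteria.

```python
def is_systemic(training_compute_flops: float,
                investment_eur: float,
                downstream_deployments: int,
                open_access: bool) -> bool:
    """Toy heuristic for flagging a systemically important FM (SFM).

    All thresholds are invented for illustration; neither the AI Act
    nor the literature cited defines numeric criteria.
    """
    score = 0
    if training_compute_flops > 1e25:   # pre-market: compute used for initial training
        score += 2
    if investment_eur > 1e8:            # pre-market: money invested in the FM
        score += 1
    if downstream_deployments > 1000:   # post-deployment: market footprint
        score += 2
    if open_access:                     # openness eases independent verification
        score -= 1
    return score >= 3

print(is_systemic(5e25, 2e8, 5000, open_access=False))  # True
print(is_systemic(1e24, 1e7, 10, open_access=True))     # False
```

The point of the sketch is the structure, not the numbers: pre-market signals are available before launch, while post-deployment signals can only move a model onto (or off) the SFM list once it is established in the market.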

Lesser burdens

Notwithstanding the arguments for a risk-based approach, even small firms might produce FMs that work their way into applications and products that reflect high-risk uses of AI. The principles of risk identification, testing and documentation should therefore apply to all FM providers, including non-systemic foundation models, but the rigour of testing and verification should be different.

Guidance, perhaps from the European Commission, could identify what these reduced testing and verification procedures should be for firms that develop non-systemic foundation models. Obligations for testing, analysis, review and independent verification could be considerably lighter for providers of non-systemic FMs, while remaining reasonably stringent.

This kind of differentiation would allow for a more gradual and dynamic regulatory approach to foundation models. The list of SFMs could be adjusted as the market develops. The Commission could also remove models from the list if they no longer qualify as SFMs.

Use of data subject to copyright

Even though the 2019 EU Copyright Directive provides an exception from copyright for text and data mining (Article 4(1) of Directive 2019/790), which would appear in principle to permit the use of copyrighted material for training of FMs, this provision does not appear in practice to have resolved the issue. The AI Act should amend the Copyright Directive to clarify the permitted uses of copyrighted content for training FMs, and the conditions under which royalties must be paid.

Third-party oversight

The question of third-party oversight is tricky for the regulation of FMs. Is an internal quality management system sufficient? Or do increasingly capable foundation models pose such a great systemic risk that pre-market auditing and post-deployment evaluations by external experts are necessary (with protection for trade secrets)?

Given the scarcity of experts, it will be important to leverage the work of researchers and civil society to identify risks and ensure conformity. A mandatory SFM incident reporting procedure that could draw on an AI incident reporting framework under development at the Organisation for Economic Co-operation and Development might be a good alternative.

Internationally agreed frameworks

Internationally agreed frameworks, technical standards and benchmarks will be needed to identify SFMs. They could also help document their environmental impacts.

Until now, the development of large-scale FMs has demanded enormous amounts of electricity and has the potential to create a large carbon footprint (depending on how the energy is sourced). Common indicators would allow for comparability, helping improve energy efficiency throughout the lifecycle of an SFM.
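One such common indicator might simply combine metered energy use with the carbon intensity of the electricity powering a training run, which is exactly why footprints depend on how the energy is sourced. The function and figures below are placeholders for illustration, not measurements of any real model or an indicator defined in any standard.

```python
def training_emissions_tco2(energy_mwh: float,
                            grid_intensity_kg_per_mwh: float) -> float:
    """Estimate training emissions in tonnes of CO2:
    energy consumed (MWh) times carbon intensity of the
    electricity used (kg CO2 per MWh), converted to tonnes."""
    return energy_mwh * grid_intensity_kg_per_mwh / 1000.0

# Placeholder numbers: a 1,000 MWh training run on a grid
# emitting 300 kg CO2/MWh versus a low-carbon grid at 30 kg/MWh.
print(training_emissions_tco2(1000.0, 300.0))  # 300.0 tCO2
print(training_emissions_tco2(1000.0, 30.0))   # 30.0 tCO2
```

The tenfold difference between the two placeholder grids illustrates why comparable indicators must record the energy source, not just the energy total.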

Safety and security

Providers of SFMs should be obliged to invest heavily in safety and security. Cyberattacks on cutting-edge AI research laboratories pose a major risk; yet despite rapidly growing investment in SFMs, funding for research on AI guardrails and AI alignment remains rather low. The internal safety of SFMs is crucial to prevent harmful outputs. External security is essential, but it alone will not be sufficient – the possibility of bribes in return for access to models should be reduced as much as possible.

Conclusion

The EU is likely to be a major deployer of generative AI. This market power may help ensure that the technology evolves in ways that accord with EU values.

The AI Act is potentially ground-breaking, but more precision is needed to manage the risks of FMs without impeding innovation by smaller competitors, especially those in the EU. Unless these issues are taken into account in the finalisation of the AI Act, there is a risk of significantly handicapping the EU’s own AI developers while failing to put adequate safeguards in place.

To read the full analysis by Bruegel, please click here.

J. Scott Marcus is a Senior Fellow at Bruegel, a Brussels-based economics think tank, and also works as an independent consultant on policy and regulation in electronic communications. His work is interdisciplinary, drawing on economics, political science / public administration, policy analysis, and engineering.

The Power of Control: How the EU Can Shape the New Era of Strategic Export Restrictions
/nextgentrade/the-power-of-control/ (Wed, 17 May 2023)

The post The Power of Control: How the EU Can Shape the New Era of Strategic Export Restrictions appeared first on WITA.

In January 2023, the United States and two of its closest allies, the Netherlands and Japan, concluded a ground-breaking agreement – but took pains not to draw attention to it, or even to call it an agreement. They held no press conference and released no joint statement. Yet the subject of their deal goes to the heart of the growing strategic competition between the US and China. And it encapsulates some of the critical challenges facing the European Union at the intersection of international security, the world economy, the technological revolution, and strategic competition.

The agreed non-agreement between the three states pertains to some of the most complex machinery and most minuscule components humankind has ever produced. With their accord, the countries effectively restricted the export to China of the most advanced microchips and the tools to produce them. These items have become a focal point in international power politics because of their use in developing artificial intelligence and their centrality to many of the 21st century’s most important technologies.

As news on the matter emerged, the Dutch prime minister confined his remarks to saying: “Those talks have been going on for a long time and we’re not saying anything about it.” The reason for reticence was clear; in response to their decision, China threatened retaliation against the Netherlands and Japan.

The move followed on from measures unilaterally implemented by the US in October 2022 to restrict the trade of advanced semiconductor technologies with China for reasons of international security. And it now appears that the Dutch national measures could soon be followed by a decision by the German government to restrict the export to China of chemicals needed for chip production.

As these sorts of incidents mount amid the escalating US-China strategic technology competition, the EU and its member states will find themselves increasingly caught in the crossfire. Washington will maintain pressure on its allies to align with its China policy. China’s military build-up will continue to change the balance of power. And Beijing’s willingness and ability to weaponise trade will likely continue to grow – it will no longer be possible for the EU to keep its pursuit of free trade separate from these powerful currents. If a rules-based order is to remain, the rules will need to change to take account of the ways in which economic security forms part of this wider competition.

To steer a course according to its own interests in this new era of strategic trade controls, the EU must urgently develop its own strategy and upgrade its tools to deliver on it. If it is to promote and defend its own sovereignty, it must start to draw its own red lines in technology engagement with China and upgrade its export control policy.

Tobias Gehrke is a senior policy fellow at the European Council on Foreign Relations, based in the Berlin office. He leads ECFR’s Geoeconomics Initiative. His area of focus includes economic security, European economic strategy, and great power competition in the global economy.

Julian Ringhof is a policy fellow with the European Power programme at the European Council on Foreign Relations. His research focuses on the implications of digital and emerging technologies for international affairs, including the topics of EU digital diplomacy and EU technological sovereignty.

To read the full policy brief, please click here.

Next Steps for U.S. Digital Leadership: Advancing Digital Governance with the Pacific and Europe
/nextgentrade/digital-governance-pacific-europe/ (Tue, 10 Aug 2021)

The post Next Steps for U.S. Digital Leadership: Advancing Digital Governance with the Pacific and Europe appeared first on WITA.

Building on a strong domestic agenda, the Administration’s international objectives include ensuring a worker-centric trade policy, rebuilding partnerships with allies, and developing a strategy to address China’s growing technology challenge. Leading on global digital governance must be a key component of this agenda.

ALI’s report focuses on next steps toward creating a U.S.-led global digital governance agenda. As the longer-term process of negotiating a multilateral digital agreement under the World Trade Organization evolves, the U.S. should focus on nearer-term goals in the Pacific and Europe.

A new digital agenda starts with the need to identify policies that are worker-centric. The Administration and Congress are working on a new trade agreement model that puts workers at the center, and this focus needs to be part of digital agreements. This includes language covering digital inclusion and access to technology, especially for underserved communities; a focus on small- and medium-sized enterprises (SMEs); and protections for online users.

Second, the U.S. should negotiate a Pacific Digital Agreement to reestablish U.S. engagement in Asia, building on existing regional agreements, which include open and democratic values. This agreement should include a group of five or six key countries in the region, incorporate new worker-centric language, together with existing high standard language from DEPA, DEA and the U.S.-Japan Agreement, and create new norms on ethical AI, facial recognition, and technologies of the future.

Finally, the U.S. should build a coalition of like-minded technology-democracies to develop a high-standard digital governance agenda advancing open and democratic values. The U.S.-EU Trade and Technology Council (TTC) is a good first step toward this goal. Building this coalition is the most critical element in countering China’s harmful approaches to tech and data governance, and the U.S. has no stronger partner in these values than the EU. However, the two sides will also need to work through digital policy friction, including privacy, taxation, and regulatory approaches like the Digital Markets Act (DMA).

Dr. Orit Frenkel is the CEO and co-founder of the American Leadership Initiative. She has 39 years of experience working on Asia, trade, and foreign policy issues. 

Ms. Rebecca Karnak is Director of Digital Projects at the American Leadership Initiative. She is also the Principal and Founder of Woodside Policy LLC.

To read the full report from the American Leadership Initiative, please click here.

Antitrust in the United States and the European Union – A Comparative Analysis
/nextgentrade/antitrust-united-states-and-eu/ (Wed, 02 Sep 2020)

The post Antitrust in the United States and the European Union – A Comparative Analysis appeared first on WITA.

I. Introduction

Technological innovation has had a profound impact on the way we live, communicate, and work. The dawn of the Fourth Industrial Revolution has opened immense opportunities but also created significant challenges. Questions about cybersecurity, disinformation, and privacy, for example, vex businesses, governments, and private citizens alike. A different set of issues are related to the sheer size, reach, and power of the companies that comprise Big Tech and how to deal with them.

Being a large corporation, and being in the vanguard of a far-reaching and ever-expanding industry, is, by itself, neither good nor bad, but it will often lead to increased scrutiny. In some instances, this might result in attempts to either block certain companies from entering a market, or, alternatively, make it more difficult for them to operate in it. In 2015, for example, President Obama alluded to this when he accused the European Union of digital protectionism in its investigations of American tech companies— “[i]n defense of Google and Facebook, sometimes the European response … is more commercially driven than anything else.” But to chalk scrutiny of large tech companies and their business practices up to mere protectionism would miss the mark. The many benefits of modern technology notwithstanding, there are powerful economic factors within digital markets that limit competition and stifle innovation, and as a result can hurt consumers.

Concerns about Big Tech are also not confined to Europe. In fact, there seems to be a growing consensus in both the United States and the European Union of the need to, at a minimum, explore ways to check certain actions and the broader influence of the largest tech companies.

To be sure, there are differences in how Big Tech is viewed in the United States and Europe. At a basic level, many Europeans are viscerally suspicious of the market and the power of big corporations. This clearly also applies to the tech sector, as evidenced by a poll conducted in the run-up to the European Parliament elections last year. Fully 64 percent of voters thought that the European Union had been too lax in its regulation of U.S. tech giants. By contrast, most Americans believe in the power of the market to self-correct and are warier of government overreach. Whether consciously or not, it is hardly a stretch to assume that these different attitudes inform thinking about competition policy and enforcement decisions on both sides of the Atlantic.

The focus of this article is on single-firm conduct, and the transatlantic divide over how best to use antitrust and competition policy to navigate this new and exciting world. Section 2 looks at what makes Big Tech unique from an antitrust perspective. Section 3 provides an overview of U.S. and EU competition law as it relates to single-firm conduct, as well as their respective institutional structures. Section 4 assumes a more prospective posture, looking at possible future trends and what steps Big Tech can take to protect its own interests in this environment.

To read the full article, please click here.

©2020 by the American Bar Association.  Reprinted with permission.  All rights reserved.  This information or any or portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

Time To Green EU Trade Policy: But How?
/nextgentrade/green-eu-trade-policy/ (Mon, 20 Jul 2020)

The post Time To Green EU Trade Policy: But How? appeared first on WITA.

“Is trade bad for the environment?” is the simple question that was put on July 11 to 110 young professionals and students from 25 member states participating in the Budapest European Agora. 40 percent of them answered yes, 37 percent answered no, and 23 percent admitted they did not know. These results highlight the complexity of the relationship. The time has come to democratise this debate and to put concrete solutions on the table.

This is all the more necessary because the 2019 elections resulted in a rebalancing of political forces in the European Parliament, which will necessitate a review of the trade-environment nexus at EU level, for several reasons:

• environment protection featured prominently among the political signals sent by the voters;
• it is, in essence, a global public good issue, better dealt with at EU level;
• the EU is seen as having so far exercised a leadership role in this area of global governance;
• trade is one of the few really “federalised” EU competences;
• as such, it remains the main EU lever to influence the global agenda, starting with SDGs.

This is confirmed by noticeable developments since the elections, such as the new President of the Commission declaring that she is in favour of border carbon taxes (a first), or the growing debate on the preservation of the rainforest that has surfaced as a result of the EU-Mercosur agreement, reached after 25 years of bilateral trade negotiations.

Even if trade measures are not among the “first best solutions” to tackle environmental degradation, revisiting the EU stance in this area appears both necessary and urgent, starting with climate-change-related aspects. The same is true of other issues such as biodiversity or ocean governance. It is a highly complex matter, necessitating deep analytical and technical investigation in several areas, new political debates, and a search for operational, implementable solutions.

To download the full paper, please click here.

White Paper on Artificial Intelligence: a European approach to excellence and trust
/nextgentrade/white-paper-on-artificial-intelligence-a-european-approach-to-excellence-and-trust/ (Wed, 19 Feb 2020)

The post White Paper on Artificial Intelligence: a European approach to excellence and trust appeared first on WITA.

As digital technology becomes an ever more central part of every aspect of people’s lives, people should be able to trust it. Trustworthiness is also a prerequisite for its uptake. This is a chance for Europe, given its strong attachment to values and the rule of law as well as its proven capacity to build safe, reliable and sophisticated products and services from aeronautics to energy, automotive and medical equipment.

Europe’s current and future sustainable economic growth and societal wellbeing increasingly draws on value created by data. AI is one of the most important applications of the data economy. Today most data are related to consumers and are stored and processed on central cloud-based infrastructure. By contrast a large share of tomorrow’s far more abundant data will come from industry, business and the public sector, and will be stored on a variety of systems, notably on computing devices working at the edge of the network.

This opens up new opportunities for Europe, which has a strong position in digitised industry and business-to-business applications, but a relatively weak position in consumer platforms. Simply put, AI is a collection of technologies that combine data, algorithms and computing power. Advances in computing and the increasing availability of data are therefore key drivers of the current upsurge of AI.

Europe can combine its technological and industrial strengths with a high-quality digital infrastructure and a regulatory framework based on its fundamental values to become a global leader in innovation in the data economy and its applications as set out in the European data strategy. On that basis, it can develop an AI ecosystem that brings the benefits of the technology to the whole of European society and economy:

  • for citizens to reap new benefits, for example improved health care, fewer breakdowns of household machinery, safer and cleaner transport systems, and better public services;

  • for business development, for example a new generation of products and services in areas where Europe is particularly strong (machinery, transport, cybersecurity, farming, the green and circular economy, healthcare and high-value added sectors like fashion and tourism); and

  • for services of public interest, for example by reducing the costs of providing services (transport, education, energy and waste management), by improving the sustainability of products and by equipping law enforcement authorities with appropriate tools to ensure the security of citizens, with proper safeguards to respect their rights and freedoms.

Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection. Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole.

The use of AI systems can have a significant role in achieving the Sustainable Development Goals, and in supporting the democratic process and social rights. With its recent proposals on the European Green Deal, Europe is leading the way in tackling climate and environmental-related challenges. Digital technologies such as AI are a critical enabler for attaining the goals of the Green Deal.

Given the increasing importance of AI, the environmental impact of AI systems needs to be duly considered throughout their lifecycle and across the entire supply chain, e.g. as regards resource usage for the training of algorithms and the storage of data. A common European approach to AI is necessary to reach sufficient scale and avoid the fragmentation of the single market.

The introduction of national initiatives risks endangering legal certainty, weakening citizens’ trust and preventing the emergence of a dynamic European industry. This White Paper presents policy options to enable a trustworthy and secure development of AI in Europe, in full respect of the values and rights of EU citizens. The main building blocks of this White Paper are:

  • The policy framework setting out measures to align efforts at European, national and regional level. In partnership between the private and the public sector, the aim of the framework is to mobilise resources to achieve an ‘ecosystem of excellence’ along the entire value chain, starting in research and innovation, and to create the right incentives to accelerate the adoption of solutions based on AI, including by small and medium-sized enterprises (SMEs).

  • The key elements of a future regulatory framework for AI in Europe that will create a unique ‘ecosystem of trust’. To do so, it must ensure compliance with EU rules, including the rules protecting fundamental rights and consumers’ rights, in particular for AI systems operated in the EU that pose a high risk.

    • Building an ecosystem of trust is a policy objective in itself, and should give citizens the confidence to take up AI applications and give companies and public organisations the legal certainty to innovate using AI.

    • The Commission strongly supports a human-centric approach based on the Communication on Building Trust in Human-Centric AI and will also take into account the input obtained during the piloting phase of the Ethics Guidelines prepared by the High-Level Expert Group on AI.

The European strategy for data, which accompanies this White Paper, aims to enable Europe to become the most attractive, secure and dynamic data-agile economy in the world – empowering Europe with data to improve decisions and better the lives of all its citizens.

The strategy sets out a number of policy measures, including mobilising private and public investments, needed to achieve this goal. Finally, the implications of AI, Internet of Things and other digital technologies for safety and liability legislation are analysed in the Commission Report accompanying this White Paper.

To view the full report, click here.
