Artificial Intelligence Archives - WITA
http://www.wita.org/nextgentrade-topics/artificial-intelligence/

Toward International Cooperation on Foundational AI Models /nextgentrade/international-cooperation-on-ai/ Thu, 16 Nov 2023

An expanded role for trade agreements and international economic policy.

The development of artificial intelligence (AI) presents significant opportunities for economic and social flourishing. The release of foundational models such as GPT-4, the large language model (LLM) underlying ChatGPT, in early 2023 captured the world's attention, heralding a transformation in our approach to work, communication, scientific research, and diplomacy. According to Goldman Sachs, LLMs could raise global GDP by 7 percent and lift productivity growth by 1.5 percentage points over 10 years. McKinsey found that generative AI such as ChatGPT could add $2.6 trillion to $4.4 trillion each year across more than 60 use cases spanning customer operations, marketing and sales, software engineering, and R&D. AI is also reshaping international trade in various ways, and LLMs reinforce this trend. The upsides of AI are significant, and achieving them will require developing responsible and trustworthy AI. At the same time, it is critical to address the potential risk of harm not only from conventional AI but also from foundational AI models, which in many cases can either magnify existing AI risks or introduce new ones.

For example, LLMs are trained on data that encodes existing social norms, with all their biases and discrimination. LLMs create information hazards by providing information that is true but can be used to harm others, such as how to build a bomb or commit fraud. A related challenge is preventing LLMs from revealing personal information about an individual, which puts privacy at risk. LLMs can also magnify existing risks of harm: misinformation, already a problem on online platforms, becomes cheaper to produce at scale, and crime can become both more frequent and more effective. LLMs may also introduce new risks, such as risks of exclusion where LLMs are unavailable in some languages.

International cooperation on AI is already happening in trade agreements and international economic forums

Many governments are either regulating AI or planning to do so, and the pace of regulation has increased since the release of ChatGPT. However, regulating AI to maximize the upsides and minimize the risks of harm without stifling innovation will be challenging, particularly for a rapidly evolving technology that is still in its relative infancy. Making AI work for economies and societies will require getting AI governance right. Deeper and more extensive forms of international cooperation can support domestic AI governance efforts in a number of ways: by facilitating the exchange of AI governance experiences, which can inform approaches to domestic AI governance; by addressing externalities and extraterritorial impacts of domestic AI governance, which can otherwise stifle innovation and reduce opportunities for the uptake and use of AI; and by finding ways to broaden global access to the computing power and data needed to develop and train AI models.

Free trade agreements (FTAs), and more recently, digital economy agreements (DEAs) already include commitments that increase access to AI and bolster its governance. These include commitments to cross-border data flows, avoiding data localization requirements, and not requiring access to source code as a condition of market access, all subject to exception provisions that give governments the policy space to pursue other legitimate regulatory goals such as consumer protection and guarding privacy. Some FTAs and DEAs, such as the New Zealand-U.K. FTA and the Digital Economy Partnership Agreement, include AI-specific commitments focused on developing cooperation and alignment, including in areas such as AI standards and mutual recognition agreements.

With AI a focus of their discussions, international economic forums such as the G7, the U.S.-EU Trade and Technology Council (TTC), and the Organization for Economic Cooperation and Development (OECD), as well as the Forum for Cooperation on Artificial Intelligence (FCAI), a track-1.5 dialogue among government, industry, and civil society jointly led by Brookings and the Centre for European Policy Studies, are important venues for developing international cooperation on AI. Initiatives to establish international AI standards in global standards development organizations (SDOs) such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) are also pivotal.

But more is needed—where new trade commitments can support AI governance

These developments in FTAs, DEAs, and international economic forums, while an important foundation, need to be developed further to fully address the opportunities and risks of foundational AI models such as LLMs. International economic policy for foundational AI models can use commitments in FTAs and DEAs and outcomes from international economic forums such as the G7 and the TTC as mutually reinforcing opportunities for developing international cooperation on AI governance. This can happen as FTAs and DEAs elevate the output of AI-focused forums and standard-setting bodies into trade commitments and develop new commitments as well. FCAI is another forum in which to explore cutting-edge AI issues.


Joshua P. Meltzer is a senior fellow in the Global Economy and Development program at the Brookings Institution.

To read the executive summary, click here.

To access the full paper PDF, click here.

The Role of Artificial Intelligence in Managing Postpandemic Supply-Chain Risks /nextgentrade/ai-supply-chain-risks/ Thu, 17 Aug 2023

In May 2023, the World Health Organization (WHO) declared an end to COVID-19 as a global health emergency. Despite that announcement, fallout from the pandemic continues to reverberate through global supply chains, exposing their opacity and fragility and catalyzing their transformation. Geopolitical issues, such as Russia’s invasion of Ukraine and rising tensions between the United States and China, have shaped supply-chain transformation, resulting in what I call a “supply-chain iron curtain” that is poised to complicate international trade.

But that does not mean the end of globalization. Rather, it reflects growing regionalization of global supply chains. Companies are deviating from a traditional, cost-driven approach to supply-chain management by making strategic and operational decisions increasingly in light of geopolitical considerations. A strategy known as “friend-shoring” has gained traction among many political and business leaders. Companies use that approach to develop supply-chain resilience by sourcing goods from ideologically compatible countries and regions. Alternative trade strategies include reshoring, which brings manufacturing back from other countries, and nearshoring, which involves sourcing from nearby countries — e.g., Canada and Mexico from the perspective of companies based in the United States.

Artificial intelligence (AI) plays an important role in enabling these global supply-chain strategies. Although AI is commonly associated with data analytics, it also has a remarkable ability to handle large volumes of unstructured data in real time. Unstructured data, such as social-media posts, emails, audiovisual files, and news reports, poses difficulties for traditional data collection and interpretation. Organizations that ignore such information miss out on valuable insights that could improve their supply-chain resilience and efficiency. This is where AI models come into play.

AI-Infused Supply Chains
Modern AI models are based on deep neural networks that are adept at detecting patterns in vast amounts of unstructured data. By analyzing disparate data sources — e.g., social-media posts indicating unexpected spikes in demand for specific goods and news reports covering local political unrest and natural disasters — AI models can predict potential supply-chain disruptions. As the models adapt and learn, their predictions gain accuracy, enabling businesses to respond quickly and effectively.
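To make this concrete, here is a minimal, illustrative sketch of the kind of text classification such systems build on. The training examples, labels, and the model choice are invented for the example; a production system would use far larger corpora and richer models.

```python
# Toy sketch: flag incoming text reports that may signal supply-chain
# disruption. Training examples and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "port workers announce strike over wages",          # disruption
    "flooding closes main rail freight corridor",       # disruption
    "factory output steady, orders in line with plan",  # normal
    "carrier adds capacity on transpacific route",      # normal
]
labels = [1, 1, 0, 0]  # 1 = disruption signal, 0 = business as usual

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

new_report = "dock strike expected to halt container handling"
risk = model.predict_proba([new_report])[0, 1]
print(f"disruption risk: {risk:.2f}")  # higher scores warrant analyst review
```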

In recent years, environmental, social, and governance (ESG) issues have come to the forefront of business operations because regulators, investors, researchers, and customers increasingly demand quantitative measures of sustainability performance from companies. However, ESG measures are meaningless unless they extend beyond a company’s boundaries to cover performance across its supply chain. AI can help monitor a company’s carbon footprint across its entire supply chain and help ensure labor compliance at every tier. AI also can evaluate potential supply partners based on their ESG ratings.

How to interact with an AI-infused supply-chain landscape is an increasingly important consideration. The pandemic catalyzed AI use in many ways by pushing businesses to their breaking points and forcing them to adapt. However, the use of AI in global supply chains remains in its infancy. Accelerating adoption may require a new catalyst, such as another major disruption or increased awareness of AI’s potential to improve supply-chain resilience and ESG compliance. Growing interest in generative AI (e.g., DALL-E 2 and ChatGPT) also could catalyze further adoption.

Companies need to realize that potential uses for AI go beyond data analysis. AI can help to usher in a new era of resilient, ethical, and sustainable supply-chain management. Integrating AI into supply-chain management is about more than just business survival. It ensures long-term competitiveness for companies in a world where ESG issues and geopolitical instability are at the forefront of both strategic and operational decision-making. Companies that leverage AI will be prepared for the next phase of global supply-chain transitions, gaining a competitive advantage in a world permanently changed by the COVID-19 pandemic.

Tinglong Dai is a professor of operations management and business analytics at Johns Hopkins University.

To read the full blog post, please click here.

Trusting AI in International Trade — the Road to Failure, or the Future? /nextgentrade/trusting-ai-international-trade/ Fri, 28 Jul 2023

Lord Waverley dons his techie hat and has a closer look at the potential applications of artificial intelligence…

Generative AI is vital to national interest, regional prosperity, and tackling shared global challenges.

It can help to grow economies, quickly and fairly, by identifying the risks entailed in a long-chain transaction or a complex supply chain. So far, so good — but there is no system in place to monitor and pinpoint suspicious global trade patterns. Nor is there any mapping of complex international trade flows, or overall analysis of trading patterns.

Every data point, each statistical analysis and prediction model, must be spot on. Over-reliance on unverified data, or information that is inaccurate or misleading, can have dire consequences. A simple misunderstanding of context can result in AI’s notorious “technological hallucinations”. Errors can multiply through a supply chain, posing risks that can have far-reaching effects on the economy — such as covering up dumping, counterfeiting, or sanctions avoidance.

AI can play a vital role in monitoring compliance, analysing trends, and assessing the impact of policies. It provides transparency and engenders trust and accountability. AI-driven decisions and recommendations produce credible, far-reaching results. It can tell us where to seek proof of reliability, raise red flags, and shed light on previously invisible interconnections of the global economy. It assists in furthering our understanding of the complexities of trade dynamics.

But it’s crucial to see AI for what it is: a tool for augmenting human capabilities, not replacing them. Take this example. Over 200 million bills of lading, crucial papers in international trade, were recently reviewed by the International Centre for Trade Transparency (ICTTM). It found that 13.6 percent contained at least one error. The OECD estimates that 2.5 percent of global trade, and up to 5.8 percent of EU imports, is in counterfeit goods. The documents provide particulars about country of origin, product codes and descriptions, quantities, and costs. Certifications, health and safety requirements, regulatory controls, anti-dumping measures, and taxation all depend on the data collected.

Again, any mistake can have dire results.

ICTTM research shows that goods produced with slave labour still appear on international markets, and that companies are bypassing safety standards by intentionally mislabelling products as requiring no certification. Dubious actors have pushed semiconductor exports through in this way. Traceability becomes more muddied with each step of the transaction.

There’s clear evidence that some offenders re-incorporate in new jurisdictions as soon as they are caught — still selling to the same importers. This basic move, because of the lack of international oversight, makes these actions almost untraceable.

When error, fraud, and counterfeit percentages are multiplied over a complex supply chain many layers deep, the dangers become apparent. These “mistakes” have serious repercussions for society — and can even put lives at risk. There is an enormous, hidden, problem in our global supply chains and individual “empires” of technology have no way of solving it.
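A rough back-of-the-envelope calculation shows why layers matter. Using the ICTTM’s 13.6 percent figure and assuming, purely for illustration, that each tier’s documentation errors are independent:

```python
# Back-of-the-envelope: chance a shipment's paper trail contains at least one
# flawed document after n tiers, assuming (unrealistically) independent errors.
p = 0.136  # ICTTM: share of bills of lading with at least one error

for tiers in (1, 3, 5, 10):
    at_least_one_error = 1 - (1 - p) ** tiers
    print(f"{tiers:>2} tiers: {at_least_one_error:.0%}")
# -> 1 tier: 14%, 3 tiers: 36%, 5 tiers: 52%, 10 tiers: 77%
```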

Nationally built systems, siloed in their own technological and political kingdoms, are not a suitable response to these problems. Countries inspect a small percentage of imports, and almost no exports. There is no system in place to monitor global trade patterns, no mapping of international trade flows.

And this is where AI can be of use. The international commerce ecosystem is complex, and bots have the capacity to spot macro- and micro-trends across the entire system, rather than just between two trading partners. The fact that we can exercise some control over our interactions with AI is significant. It can help us spot potential threats and zero in on the primary papers that need closer inspection. It is a tool to identify and chart patterns and act as an early warning system, while keeping faith in the reliability of source materials. Once we know where to look, locating bad actors and verifying documents becomes simpler.

The boundaries of AI are still expanding. Once we are able to recognise global macro trends, we can use it to our advantage. It can shed light on our reliance on specific vendors and suppliers. It can help us to evaluate the economic risks associated with our suppliers, as well as learn how our products fit into global supply networks. With AI, a component that poses a security concern can be identified and rapidly removed from the supply chain. Without it, such problems may remain hidden.

Human and computer error, and intentional fraud in supply chains, can all be distinguished. AI’s potential lets us conduct comprehensive analyses down to the smallest of details, leading us straight back to the original suppliers, buyers, and documents. The goal is a zero-trust approach in which papers and records are verified and analysed, as the sketch below illustrates.
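A zero-trust posture can start with something as simple as content fingerprints. The sketch below is illustrative only: the registry of attested fingerprints is hypothetical, and real verification systems involve far more than a hash comparison.

```python
# Illustrative zero-trust check: verify a trade document against a registry of
# fingerprints attested by the issuing party. The registry here is hypothetical.
import hashlib

def fingerprint(doc: bytes) -> str:
    return hashlib.sha256(doc).hexdigest()

attested_registry = {
    fingerprint(b"BL-0001|origin:NL|HS:850440|qty:1200|value:84000"),
}

def verify(doc: bytes) -> bool:
    """True only if the document matches an attested fingerprint exactly."""
    return fingerprint(doc) in attested_registry

original = b"BL-0001|origin:NL|HS:850440|qty:1200|value:84000"
tampered = b"BL-0001|origin:NL|HS:850440|qty:1200|value:8400"  # altered value
print(verify(original), verify(tampered))  # True False
```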

Applying AI to international trade provides a workable answer to the growing difficulties and risks associated with internationally integrated markets. By embracing it, we are not advocating for unquestioning faith in an unknown system. We are suggesting its use as a tool to draw focus to specific areas. If we continue to adopt and use AI with a zero-trust, verify-and-confirm methodology, the transparency, accuracy, and efficiency it can bring could become essential in navigating the global commerce system.

Right now, at the intersection of science and business, artificial intelligence presents a once-in-a-generation opportunity. Used wisely, it has the potential to help overcome some entrenched problems. Its potential extends beyond the cutting of human labour or the generation of otherwise unpredictable results. It gives us a new perspective, an analytical tool that could radically alter how we think about international trade. It could help our economies to flourish in ways that are beneficial to all involved.

There’s no tolerance for AI hallucinations here. Precision, clarity, and faith in human scrutiny are front and centre. ESG reporting is becoming the new norm. Interoperability affords legal protection and a process that safeguards SMEs and banks. Collaborative efforts such as Project Perseus bring together technology, finance, and policy to unlock sustainable access for SMEs via data-sharing. This is critical for stakeholders in the business and banking worlds.

Nationally built systems in technological and political silos must be avoided to combat these challenges. Collaborative efforts between nation states would enable a comprehensive understanding of patterns and targeted strategies. Artificial intelligence should be seen as an instrument that shows us the bigger picture of a vast chain over which no single country, or corporate, should ever have total control.

So, where do governments, regulators, and the private sector go from here? Frameworks and processes are in place to deliver success — and the time for theory is over.

Lord (JD) Waverley is a Member of the House of Lords of the United Kingdom and Chairman of Capital Finance International (CFI.co).

To read the full commentary, please click here.

Adapting the European Union AI Act to Deal with Generative Artificial Intelligence /nextgentrade/european-union-artificial-intelligence/ Wed, 19 Jul 2023

When the European Commission in April 2021 proposed an AI Act to establish harmonised EU-wide rules for artificial intelligence, the draft law might have seemed appropriate for the state of the art. But it did not anticipate OpenAI’s release of the ChatGPT chatbot, which has demonstrated that AI can generate text at a level similar to what humans can achieve. ChatGPT is perhaps the best-known example of generative AI, which can be used to create texts, images, videos and other content.

Generative AI might hold enormous promise, but its risks have also been flagged up. These include (1) sophisticated disinformation (eg deep fakes or fake news) that could manipulate public opinion, (2) intentional exploitation of minorities and vulnerable groups, (3) historical and other biases in the data used to train generative AI models that replicate stereotypes and could lead to output such as hate speech, (4) encouraging the user to perform harmful or self-harming activities, (5) job losses in certain sectors where AI could replace humans, (6) ‘hallucinations’ or false replies, which generative AI can articulate very convincingly, (7) huge computing demands and high energy use, (8) misuse by organised crime or terrorist groups, and finally, (9) the use of copyrighted content as training data without payment of royalties.

To address those potential harms, it will be necessary to come to terms with the foundation models that underlie generative AI. Foundation models, or models through which machines learn from data, are typically trained on vast quantities of unlabelled data, from which they infer patterns without human supervision. This unsupervised learning enables foundation models to exhibit capabilities beyond those originally envisioned by their developers (often referred to as ‘emergent capabilities’).

The evolving AI Act

The proposed AI Act (European Commission, 2021), which at the time of writing is still to be finalised between the EU institutions, is a poor fit for foundation models. It is structured around the idea that each AI application can be allocated to a risk category based on its intended use. This structure largely reflects traditional EU product liability legislation, in which a product has a single, well-defined purpose. Foundation models, however, can easily be customised to a great many potential uses, each of which has its own risk characteristics.

In the ongoing legislative work to amend the text, the European Parliament has proposed that providers of foundation models perform basic due diligence on their offerings. In particular, this should include:

  • Risk identification. Even though it is not possible to identify in advance all potential use cases of a foundation model, providers are typically aware of certain vectors of risk. OpenAI knew, for instance, that the training dataset for GPT-4 featured certain language biases because over 60 percent of all websites are in English. The European Parliament would make it mandatory to identify and mitigate reasonably foreseeable risks, in this case inaccuracy and discrimination, with the support of independent experts.
  • Testing. Providers should seek to ensure that foundation models achieve appropriate levels of performance, predictability, interpretability, safety and cybersecurity. Since the foundation model functions as a building block for many downstream AI systems, it should meet certain minimum standards.
  • Documentation. Providers of foundation models would be required to provide substantial documentation and intelligible usage instructions. This is essential not only to help downstream AI system providers better understand what exactly they are refining or fine-tuning, but also to enable them to comply with any regulatory requirements. A minimal sketch of what such documentation might contain follows this list.
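By way of illustration only, machine-readable model documentation might look something like the following; the field names are invented for this sketch and are not drawn from the Act or the Parliament’s text.

```python
# Illustrative only: fields a foundation-model provider might document for
# downstream AI system providers. Field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class FoundationModelCard:
    name: str
    version: str
    training_data_sources: list[str]
    known_biases: list[str]          # e.g. language imbalance in web corpora
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    evaluation_benchmarks: dict[str, float]
    usage_instructions: str

card = FoundationModelCard(
    name="example-fm",
    version="1.0",
    training_data_sources=["web crawl (approx. 60% English)"],
    known_biases=["under-represents low-resource languages"],
    intended_uses=["text summarisation", "drafting assistance"],
    out_of_scope_uses=["fully automated legal or medical decisions"],
    evaluation_benchmarks={"toxicity_rate": 0.02},
    usage_instructions="See provider handbook for fine-tuning guidance.",
)
```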

Room for improvement

These new obligations, if adopted in the final AI Act, would be positive steps, but lack detail and clarity, and would consequently rely heavily on harmonised standards, benchmarking and guidelines from the European Commission. They also risk being excessively burdensome. A number of further modifications could be put in place.

Risk-based approach

Applying all obligations to the full extent to every foundation model (FM) provider, both large and small, is unnecessary. It might impede innovation and would consolidate the market dominance of firms that already have a considerable lead in FMs, including OpenAI, Anthropic and Google DeepMind. Even without additional regulatory burdens, it might be very hard for any companies outside this group to match the resources of, and catch up with, the FM market leaders.

A distinction could therefore be made between systemically important and non-systemically important FMs, with significantly lower burdens for the latter. This would be in line with the approach taken by the EU Digital Services Act (DSA), which notes that “it is important that the due diligence obligations are adapted to the type, size and nature of the … service concerned.” The DSA imposes much more stringent obligations on certain service providers than on others, notably by singling out very large online platforms (VLOPs) and very large online search engines (VLOSEs).

There are two reasons for differentiating between systemic and non-systemic foundation models and only imposing the full weight of mandatory obligations on the former. First, the firms developing systemic foundation models (SFMs) will tend to be larger, and better able to afford the cost of intense regulatory compliance. Second, the damage caused by any deviation by a small firm with a small number of customers will tend to be far less than that potentially caused by an SFM.

There are useful hints in the literature (Bommasani et al, 2023; Zenner, 2023) as to criteria that might be used to identify SFMs, such as the data sources used, or the computing resources required to initially train the model. These will be known in advance, as will the amount of money invested in the FM. These pre-market parameters presumably correlate somewhat with the future systemic importance of a particular FM and will likely also correlate with the ability of the provider to invest in regulatory compliance. The degree to which an FM provider employs techniques that facilitate third-party access to their foundation models and thus independent verification, such as the use of open APIs, or open source, or (especially for firms that do not publish their source code) review of the code by independent, vetted experts, might also be taken into account. Other, post-deployment parameters, including the number of downloads, or use in downstream services or revenues, can only be identified after the product has established itself in the market.

Lesser burdens

Notwithstanding the arguments for a risk-based approach, even small firms might produce FMs that work their way into applications and products that reflect high-risk uses of AI. The principles of risk identification, testing and documentation should therefore apply to all FM providers, including non-systemic foundation models, but the rigour of testing and verification should be different.

Guidance, perhaps from the European Commission, could identify what these reduced testing and verification procedures should be for firms that develop non-systemic foundation models. Obligations for testing, analysis, review and independent verification could be much less burdensome and intensive for providers of non-systemic FMs, while remaining reasonably stringent.

This kind of differentiation would allow for a more gradual and dynamic regulatory approach to foundation models. The list of SFMs could be adjusted as the market develops. The Commission could also remove models from the list if they no longer qualify as SFMs.

Use of data subject to copyright

Even though the 2019 EU Copyright Directive provides an exception from copyright for text and data mining (Article 4(1) of Directive 2019/790), which would appear in principle to permit the use of copyrighted material for training of FMs, this provision does not appear in practice to have resolved the issue. The AI Act should amend the Copyright Directive to clarify the permitted uses of copyrighted content for training FMs, and the conditions under which royalties must be paid.

Third-party oversight

The question of third-party oversight is tricky for the regulation of FMs. Is an internal quality management system sufficient? Or do increasingly capable foundation models pose such a great systemic risk that pre-market auditing and post-deployment evaluations by external experts are necessary (with protection for trade secrets)?

Given the scarcity of experts, it will be important to leverage the work of researchers and civil society to identify risks and ensure conformity. A mandatory SFM incident reporting procedure that could draw on an AI incident reporting framework under development at the Organisation for Economic Co-operation and Development might be a good alternative.

Internationally agreed frameworks

Internationally agreed frameworks, technical standards and benchmarks will be needed to identify SFMs. They could also help document their environmental impacts.

Until now, the development of large-scale FMs has demanded enormous amounts of electricity and has the potential to create a large carbon footprint (depending on how the energy is sourced). Common indicators would allow for comparability, helping improve energy efficiency throughout the lifecycle of an SFM.

Safety and security

Providers of SFMs should be obliged to invest heavily in safety and security. Cyberattacks on cutting-edge AI research laboratories pose a major risk; nonetheless, and despite rapidly growing investments in SFMs, the funding for research in AI guardrails and AI alignment is still rather low. The internal safety of SFMs is crucial to prevent harmful outputs. External security is essential, but it alone will not be sufficient – the possibility of bribes in return for access to models should be reduced as much as possible.

Conclusion

The EU is likely to be a major deployer of generative AI. This market power may help ensure that the technology evolves in ways that accord with EU values.

The AI Act is potentially ground-breaking, but more precision is needed to manage the risks of FMs without impeding innovation by smaller competitors, especially those in the EU. Unless these issues are taken into account in the finalisation of the AI Act, there is a risk of significantly handicapping the EU’s own AI developers while failing to install adequate safeguards.

To read the full analysis by Bruegel, please click here.

J. Scott Marcus is a Senior Fellow at Bruegel, a Brussels-based economics think tank, and also works as an independent consultant dealing with policy and regulatory policy regarding electronic communications. His work is interdisciplinary and entails economics, political science / public administration, policy analysis, and engineering.

Artificial Intelligence and International Trade: Some Preliminary Implications /nextgentrade/artificial-intelligence-international-trade/ Fri, 22 Apr 2022

Artificial intelligence (AI) has strong potential to spur innovation, help firms create new value from data, and reduce trade costs. Growing interest in the economic and societal impacts of AI has also prompted interest in the trade implications of this new technology. While AI technologies have the potential to fundamentally change trade and international business models, trade itself can also be an important mechanism through which countries and firms access the inputs needed to build AI systems, whether goods, services, people or data, and through which they can deploy AI solutions globally. This paper explores the interlinkages between AI technologies and international trade and outlines key trade policy considerations for policy makers seeking to harness the full potential of AI technologies.

1. Introduction


Artificial intelligence (AI) is widely considered to be a general-purpose technology with a strong potential to spur innovation, help firms create new value from data, and reduce trade costs (Agrawal, Gans and Goldfarb, 2017). Broadly defined, AI is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” (OECD, 2019). AI uses data to train algorithms and often co-exists with software that can be embedded in hardware such as robots, autonomous cars or appliances based on the Internet of Things (IoT). Examples of AI applications include smart assistants, translation, self-driving cars, medical diagnosis and robotics. Today, AI is used in many sectors, ranging from precision agriculture (Forbes, 2019) to manufacturing (McKinsey, 2019; Li, Hou and Yu, 2017) and services (Huang and Rust, 2018).

Growing interest in the economic and societal impacts of AI is being matched by growing interest in the issues at the intersection of trade policy and AI (Lee-Makiyama, 2018; Irion and Williams, 2019; Goldfarb and Trefler, 2018). While the larger regulatory and policy environment around AI (e.g. security, privacy, etc.) continues to evolve, it is important to also think about the issues that are specific to trade and AI. This is also important in the context of the current trade policy deliberations, including in the Joint Statement Initiative on e-commerce discussed at the WTO or in regional trade agreements.

Previous OECD work has outlined the profound implications that digitalisation has had for trade and market openness, as well as for how policy makers approach trade in goods and services in the context of rapid technological developments (López González and Ferencz, 2018). The COVID-19 pandemic further accelerated the digital transformation, underscoring the importance of digital trade for mitigating the economic slowdown and speeding up recovery (OECD, 2020). This paper focuses on the specific applications of AI with a view to helping policy makers better understand the benefits and challenges that AI brings for trade and to outline key trade policy considerations for harnessing the full potential of this technology.

The paper begins with a brief description of AI technologies and shows what existing data can tell us about the adoption and proliferation of AI. This is followed by a deeper discussion on the policy issues at the intersection of trade and AI: looking at what AI means for trade and what trade means for AI. Three case studies then discuss specific applications of AI technologies in international trade. The last section provides some concluding remarks.

The paper is intentionally short and focused on the broad issues that might be worthy of consideration by trade policy-makers. The field of AI is fast evolving and has many different important, and indeed contentious, facets. This paper aims to provide an initial framework for thinking about the implications of AI for trade without delving into a number of important but broader regulatory concerns which are being discussed in the context of the work of the OECD Science Technology and Innovation Directorate.

Janos Ferencz is a trade policy analyst at the Trade in Services Division of the Organisation for Economic Co-operation and Development (OECD).

Javier López González is a Senior Economist at the Trade and Agriculture Directorate of the OECD.

Irene Oliván is a trade policy analyst at the Trade in Services Division of the OECD.

To read the full policy paper, please click here.

Alliance power for cybersecurity /nextgentrade/alliance-power-for-cybersecurity/ Tue, 04 Aug 2020

There is only one internet and only one cyberspace connecting individuals, enterprises, and nations all over the world. Ever more frequently, this shared space is coming under attack from malicious actors, both state and non-state, who seek to exploit cyberspace’s shared infrastructure for their own ends. Addressing cybersecurity threats is, therefore, an international problem that requires an international solution. But given the myriad threats faced in the cyber domain and the ambiguous borders that exist there, how can states best address these challenges and ensure the safety of their own networks and people?

In this new report from the Scowcroft Center’s Transatlantic Security Initiative, Cyber Statecraft Initiative senior fellow Kenneth Geers argues that the best way for democratic states to defend their own cyber networks is to leverage the multinational strength of political and military alliances like NATO and the European Union. Alliances like NATO give democracies an advantage over their authoritarian rivals by providing already established mechanisms for multinational cooperation. Alliances are therefore better equipped to tackle the inherently international challenges of cybersecurity.  

To illustrate the impact of alliances on cybersecurity, Geers uses events in Ukraine as a case study, comparing the Ukrainian government’s efforts to defend against Russian cyberattacks shortly after the 2014 revolution with measures taken in cooperation with partners to defend the 2019 presidential election. Geers illustrates how collective action in 2019 produced improved security outcomes compared to efforts taken by Ukraine alone. Building on these lessons, Geers argues that the only structures likely to produce tangible results in cybersecurity are those within political and military alliances. Indeed, the only credible cyber superpower is a robust alliance. The report then offers a series of recommendations on how NATO and the EU can promote trust and collaboration among Allies and partners to build a more effective cyber alliance.  


To view the full report at Atlantic Council, please click here.

The Impact of COVID-19 on the Future of Advanced Manufacturing and Production: Insights from the World Economic Forum’s Global Network of Advanced Manufacturing Hubs /nextgentrade/the-impact-of-covid-19-on-the-future-of-advanced-manufacturing-and-production-insights-from-the-world-economic-forums-global-network-of-advanced-manufacturing-hubs/ Thu, 04 Jun 2020

While powerful megatrends like global trade tensions, climate change, new technology innovations, and the current COVID-19 crisis impact all parts of the globe, the reality of those impacts – and therefore the necessary responses to them – is inherently driven by unique regional characteristics and regional enabling environments. The Global Network of Advanced Manufacturing Hubs (AMHUBs) connects regional manufacturing ecosystems to help rapidly transform manufacturing to keep pace with the global megatrends that might otherwise create disruptions for manufacturers around the globe.

With the arrival of the coronavirus pandemic, there is a need for the industry to move faster than ever to support the response to this international health crisis while mitigating its impact on manufacturers and their respective supply chain networks around the globe. This paper reflects an aggregate of voices from the Global Network of AMHUBs and focuses on COVID-19’s impact in each region; response efforts from manufacturing and governments; and best practices to achieve rapid results and mitigate repercussions to subsequent regions by learning from those affected earlier. The World Economic Forum is committed to enabling and amplifying cross-AMHUB collaborations that accelerate the industry’s ability to adapt to the current crisis while ensuring future resilience through advanced manufacturing technologies and processes.


To read the full report, please click here.

The U.S. and EU should base AI regulations on shared democratic values /nextgentrade/the-u-s-and-eu-should-base-ai-regulations-on-shared-democratic-values/ Mon, 02 Mar 2020

Artificial intelligence (AI) is transforming how economies grow, generate jobs, and conduct international trade. The McKinsey Global Institute estimates that AI could add around 16 percent, or $13 trillion, to global output by 2030.

This makes AI a crucial piece of policy concerning digital trade, data flows, and their implications for additional policy issues such as cybersecurity, privacy, consumer protection, and the broader economic impacts of access to data and digital technologies. Governments around the world are responding with plans to promote research and development, AI investment, and trade.

Last week, the European Commission (EC) published a white paper on AI and a data strategy as part of a plan for “shaping Europe’s digital future,” carrying out President Ursula von der Leyen’s objective to coordinate an approach to AI. The paper recognizes that the EU lags China and the U.S. in AI investment, development, and data resources, but sees the EU’s strong manufacturing sector as an opportunity for EU leadership in AI.

The paper outlines the need for EU leadership in developing an “ecosystem of excellence” by mobilizing resources for research and innovation in AI, with the aim of attracting over €20 billion annually for AI over the next decade. It also identifies the need to develop an “ecosystem of trust” by putting in place a regulatory framework that gives citizens, companies, and public organizations confidence in using AI.

This could include new EU regulation to address cybersecurity risk from AI, improve understanding of how decisions using AI are made, and expand consumer protection regulation to AI services. The EU is also focusing on the need to create European data spaces that can facilitate the use and sharing of data by business and government.

In January 2020, the White House proposed 10 AI regulatory principles to govern the development and use of AI technologies in the private sector. Some of the principles resonate with the EU’s objectives. Direction to federal agencies to avoid regulation that unnecessarily hampers AI innovation and growth could apply to the EU’s white paper drafting as much as to U.S. agencies.

If so, the white paper is likely to raise eyebrows in the U.S. In general, the white paper adopts a “risk-based” approach to AI regulation. For sectors and applications that are deemed “high-risk,” the EC outlined an approach that may include setting standards for the quality of AI systems and the likelihood of conformity assessments that could include testing and certification.

The white paper also declared that the EU “will continue to cooperate with like-minded countries, but also with global players.” Overlapping principles between the EC and U.S. announcements offer a basis for such cooperation with the United States. The White House principles include public trust in AI, the costs and risks of AI, and the impact of AI on fairness, discrimination, and security of information as well as on privacy, individual rights, autonomy, and civil liberties.

These resemble seven key requirements identified by an EU High-Level Group of Experts on AI that are incorporated into the white paper: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.

In fact, the U.S. and the EU both agree on the need for AI regulation; the key challenge will be doing it in a way that is effective and prevents unnecessary barriers to transatlantic trade and investment.

The white paper is clear that EU AI regulation will need to apply to all economic operators providing AI-enabled products and services. In other words, all investment and trade with the EU will need to be consistent with EU AI regulation, which given the potential widespread uptake of AI could include significant amounts of trade and investment in goods and services.

To view the full blog, click here.

The European Commission considers new regulations and enforcement for “high-risk” AI /nextgentrade/the-european-commission-considers-new-regulations-and-enforcement-for-high-risk-ai/ Wed, 26 Feb 2020

Last week, the European Commission (EC) released a white paper that seeks to ensure societal safeguards for “high-risk” artificial intelligence (AI). The number of large-scale and highly influential AI models is increasing in both the public and private sector, and so the EC is seriously considering where new regulations, legislative adjustments, or better oversight capacity might be necessary.

These models affect millions of people through critical decisions related to credit approval, insurance claims, health interventions, pre-trial release, hiring, firing, and much more. While facial recognition, autonomous weapons, and artificial general intelligence tend to dominate the conversation, the debate on regulating more commonplace applications is equally important.

The new white paper echoes the principles of the earlier AI Ethics Guidelines: non-discrimination, transparency, accountability, privacy, robustness, environmental well-being, and human oversight. This new paper goes beyond many prior AI ethics frameworks to offer specific AI regulatory options. Some of these options would be alterations to existing EU law, such as ensuring product liability law can be applied to AI software and AI-driven services.

More noteworthy, however, is the proposal to consider entirely new requirements on high-risk AI applications. The high-risk categorization is limited to specific use-cases within specific sectors where there are particularly large stakes. The report explicitly names sectors such as transportation, healthcare, energy, employment, and remote biometric identification, but others like financial services could be included.

Within these sectors, only especially impactful AI applications would receive the label “high-risk” and accompanying oversight. So, while a healthcare allocation algorithm may be included, a hospital’s AI-enabled scheduling software would probably not qualify.

The report details a series of possible oversight mechanisms for applications deemed high-risk AI. Some of these would set standards for the use of AI, such as using representational training data and meeting defined levels of model accuracy and robustness. Others require storage of data and documentation, potentially enabling government auditing of AI models. Transparency measures are also under consideration.

These might require reporting to regulatory authorities (e.g. an analysis of bias for protected classes) or directly to consumers affected by the model (e.g. an individualized explanation for their model outcome). Not all these requirements would apply to all high-risk AI, but instead some subset of these mechanisms would be paired with each high-risk application.
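To make the bias-reporting idea concrete, the sketch below computes selection-rate ratios across protected groups, in the spirit of the U.S. “four-fifths rule”. The records and the 0.8 threshold are illustrative; the white paper itself does not prescribe any particular metric.

```python
# Illustrative bias report: favourable-outcome rates by protected group,
# compared against the best-treated group (four-fifths-rule style check).
# The records and the 0.8 threshold are invented for illustration.
from collections import defaultdict

def disparity_report(records):
    """records: iterable of (group, favourable: bool) pairs."""
    tally = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for group, favourable in records:
        tally[group][0] += int(favourable)
        tally[group][1] += 1
    rates = {g: fav / total for g, (fav, total) in tally.items()}
    best = max(rates.values())
    return {g: (rate, rate / best) for g, rate in rates.items()}

records = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 55 + [("B", False)] * 45
for group, (rate, ratio) in disparity_report(records).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: rate {rate:.2f}, ratio {ratio:.2f} ({flag})")
```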

In weighing how these mechanisms might work, it’s valuable to contemplate how various interventions might affect prominent instances of AI harms. For instance, would enabling audits slow the proliferation of pseudoscientific hiring software across human resources departments? Would reporting requirements help identify discriminatory patient treatment in healthcare allocation algorithms?

Would a more rigorous testing process for Tesla’s autonomous driving have made its vehicles more resistant to the stickers that trick them into driving at dangerous speeds? These are questions that the EC paper is raising—questions that U.S. policy-makers should be asking, too. Given a type of algorithm being used for a particular high-risk purpose, what oversight mechanisms might ensure that it functions in a legal and ethical way?

While the EC paper is exploring new requirements, it also makes clear that enforcing extant law is difficult due to the complexity and opacity of AI. It takes specific expertise in programming and statistics to evaluate the fairness and robustness of AI models, which regulatory agencies across the EU may not yet have.

This is very likely an issue in the United States, too. AI models can easily run afoul of many federal requirements, such as the Civil Rights Acts, the Americans with Disabilities Act, the Fair Credit Reporting Act, the Fair Housing Act, and financial modeling regulations. It is not clear that U.S. regulatory agencies are staffed to handle this emerging challenge.

The EC paper notes that investing in its ability to enforce AI safeguards has real advantages for industry, too. The European approach argues that responsible regulation will build public trust in AI, allowing companies to build automated systems without losing the confidence of their customers. Broadly speaking, the EC’s perspective is positive about the emergence of AI as a general-purpose technology.

It presents AI as a powerful tool to improve scientific research, drive economic growth, and make public services more efficient. The EU seeks to attract €20 billion ($21.7 billion USD) in annual funding for AI, some of which would come from expanding EU spending. This effort would also be bolstered by an ambitious strategy to incentivize data sharing and expand access to cloud infrastructure.

To view the full blog, click here.

Integrating AI in U.S.-UK Digital Trade Through Technical Standards Cooperation: Financial Services, Cars, and Pharmaceuticals /nextgentrade/integrating-ai-in-u-s-uk-digital-trade-through-technical-standards-cooperation-financial-services-cars-and-pharmaceuticals/ Mon, 24 Feb 2020

The Atlantic Council hosted Confederation of British Industry (CBI) Director-General Dame Carolyn Fairbairn in Washington, D.C. on February 5, 2020 for a discussion about the UK’s global trading future post-Brexit. Dame Carolyn was supportive of the UK pursuing a new free trade agreement with the US that would include new standards for tech, including e-commerce, fintech, and artificial intelligence (AI).

She suggested that the OECD AI Principles would be a good place to start with respect to operationalizing high AI standards in a U.S.-UK trade deal. There is a lot to be said for this approach, particularly in making Principle 2.5 c) a reality: “Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.”

This also makes sense because neither the United States nor the United Kingdom are likely going to want to do away with the idea that market access commitments in trade agreements should be technologically neutral, i.e. that if a country commits to open up the market in a given sector, that sector should be open no matter what technology is used to serve that sector.

Cooperating on standards development can, however, have the effect of stimulating the use of innovative technologies such as AI, which is a worthwhile goal.

Standards Cooperation Does Not Mean Countries Must Have Identical Laws

Laws, regulations, and standards are sometimes conflated, which occasionally leads to confusion. The European Committee for Standardization (CEN) defines a standard as “a technical document designed to be used as a rule, guideline or definition. It is a consensus-built, repeatable way of doing something.” The National Institute of Standards and Technology (NIST) provides examples of AI standards areas such as:

  • Data sets in standardized formats, including metadata for training, validation and testing of AI systems (a minimal sketch of such a metadata record follows this list)
  • Tools for capturing and representing knowledge and reasoning in AI systems
  • Fully documented use cases providing information about specific AI applications and guides for making decisions about when to deploy AI systems
  • Benchmarks to drive AI innovation
  • Testing methodologies
  • Metrics to quantifiably measure and characterize AI technologies
  • AI testbeds
  • Tools for accountability and auditing

For instance, AI systems often require safeguarding Personally Identifiable Information (PII). The International Organization for Standardization (ISO) standard ISO/IEC 29101:2013 defines a privacy architecture framework for entities that process such data. It does not specify what the definition of PII is (that is a country’s sovereign right to determine), only how to create ICT systems to protect such data.
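ISO/IEC 29101 describes an architecture rather than code, but one common building block in such architectures is pseudonymisation of identifiers before data reaches an AI pipeline. A minimal sketch, with a placeholder key, follows.

```python
# Minimal pseudonymisation sketch: a keyed hash lets records be linked across
# systems without exposing the raw identifier. Key management is out of scope
# here, and the key below is a placeholder, not a recommendation.
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-key"

def pseudonymise(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

print(pseudonymise("jane.doe@example.com"))  # stable token, no raw PII
```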

Technical standards cooperation could pay dividends if companies could use standards (methodologies) accepted by regulators on both sides of the Atlantic to demonstrate how transparency, bias avoidance, privacy protection and other regulatory priorities are being addressed from a technical standpoint.

We are really talking about developing common methodologies (technical standards) to achieve certain objectives such as the protection of privacy, not substantive legal/regulatory convergence. And we are not talking about “checklists” either, because the idea is that companies establish ongoing processes, not a compliance checklist tied to a single date.

Composition of U.S.-UK Trade

In this context, understanding the composition of U.S.-UK trade and recalling the most modern trade agreement in existence from a digital standpoint – the United States Mexico Canada Agreement (USMCA) – is a good place to start. The United States Trade Representative (USTR) notes that in 2018, the U.S. goods and services trade with the UK totaled roughly $261.9 billion.

For both countries, trade in financial services, cars and pharmaceuticals is significant. From an AI promotion standpoint, homing in on these sectors could potentially allow the two countries to do some innovative things in a trade agreement. Article 19.14 of the USMCA’s Digital Trade Chapter says that the Parties “shall endeavor” to cooperate on a range of issues important for digital trade.

A U.S.-UK trade deal should ideally delineate areas where the U.S. and the UK “shall” cooperate. It might also be worthwhile for the U.S. and the UK to explore whether USMCA Chapter 11 commitments with respect to Technical Barriers to Trade could be adapted to the U.S.-UK context.

Financial Services: Are Robo-Advisors, Use of Public Records, and Alternative Data Ripe for Cooperation?

The USMCA’s Chapter 17 covers financial services and provides for a point of departure in thinking about what a U.S.-UK deal might look like with respect to financial services. For example, chapter 17:7 provides for commitment with respect to “New Financial Services.” What this means is that if one Party permits a new financial service to be offered in its territory, then it must allow the other two parties to offer the same new financial service. See below for the text of this provision:

Each Party shall permit a financial institution of another Party to supply a new financial service that the Party would permit its own financial institutions, in like circumstances, to supply without adopting a law or modifying an existing law. Notwithstanding Article 17.5.1(a) and (e) (Market Access), a Party may determine the institutional and juridical form through which the new financial service may be supplied and may require authorization for the supply of the service.

If a Party requires a financial institution to obtain authorization to supply a new financial service, the Party shall decide within a reasonable period of time whether to issue the authorization and may refuse the authorization only for prudential reasons. There is a “like circumstances” caveat, as well as scope for the Parties to “determine the institutional and juridical form through which the new financial service may be supplied.”

The U.S. and the UK may want to identify areas where mutual recognition regimes of some kind make sense. There has been a lot of discussion, for instance, regarding how to regulate automated financial advisory services, or “robo-advisors.” See this LEXOLOGY piece, for instance, on how regulators in the U.S., the UK, Europe, Canada, and Hong Kong are dealing with this issue.

Michel Girard’s January 2020 Paper entitled: “Standards for Digital Cooperation” provides some good ideas for what might be possible in this and other sectors. He notes, for instance, that the report from a 2018 High-Level Panel on Digital Cooperation proposes new data governance technical standards to address gaps such as the creation of audits and certification schemes to monitor compliance of AI systems with technical and ethical standards. I have also written about how explanations and audits can enhance trust in AI. 

Another example where closer U.S.-UK cooperation might be warranted is know your customer (KYC) and anti-money laundering (AML) services. Although trade agreements have, to date, appropriately not gone into detail about what a privacy law should look like (the Comprehensive and Progressive Agreement for Trans-Pacific Partnership and the USMCA say only that Parties shall have a privacy regime), it might be worth clarifying that privacy law should not be an impediment to the provision of these essential services.

In practice this would mean that “right to be forgotten” laws would have to be appropriately tailored, and that companies would continue to be able to use public records and widely distributed media to provide high-quality KYC and AML services. Alternative data is another area where the U.S. and the UK might want to step up collaboration. For example, the U.S. and UK investment industries could potentially benefit from greater use of voluntary alternative data standards.

Standards that improve data documentation, raise data quality, unify data pipeline management, reduce time spent on data delivery and ingestion, ease permissions management and authentication, and simplify vendor due diligence and contracting would be a good thing. Export Britain actually advises UK firms to focus on, among other sectors, financial services when exporting to the United States.

The same is undoubtedly true for U.S. financial services firms looking to expand in the UK. It may make sense for regulators on both sides of the Atlantic to work together to promote the use of alternative data standards for the investment industry. There is perhaps also scope to work together on the use of alternative data in making consumer credit decisions.

There is substantial evidence suggesting that the use of alternative data in credit scoring can help expand service to underserved markets, as these comments to the Consumer Financial Protection Bureau (CFPB) make clear. Common U.S.-UK alternative data standards could be helpful, particularly if they are coupled with safeguards to ensure that alternative data can be developed through access to public records and widely distributed media in both the United States and the United Kingdom.
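
To illustrate what a voluntary alternative data standard might cover, the sketch below defines a hypothetical vendor manifest whose fields mirror the goals above: documentation, quality, delivery, permissions, and due diligence. Every name here is an illustrative assumption, not an existing industry schema:

```python
from dataclasses import dataclass

@dataclass
class AlternativeDataManifest:
    """Hypothetical manifest a data vendor ships alongside a feed."""
    feed_name: str
    description: str          # data documentation
    completeness_pct: float   # data quality indicator
    update_frequency: str     # pipeline management / delivery expectations
    permitted_uses: tuple     # permissions management
    vendor_contact: str       # simplifies due diligence and contracting

    def validate(self) -> list:
        """Return a list of problems; an empty list means the manifest
        passes these basic, illustrative checks."""
        problems = []
        if not 0.0 <= self.completeness_pct <= 100.0:
            problems.append("completeness_pct must be between 0 and 100")
        if not self.permitted_uses:
            problems.append("permitted_uses must not be empty")
        return problems

manifest = AlternativeDataManifest(
    feed_name="rent-payment-history-v2",  # hypothetical feed
    description="Monthly rent payment records aggregated from property managers",
    completeness_pct=97.5,
    update_frequency="monthly",
    permitted_uses=("credit_underwriting",),
    vendor_contact="data-ops@vendor.example",
)
assert manifest.validate() == []
```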

Cars: Can the United States and the United Kingdom Drive Connectedness?

On January 8, 2020, the Trump Administration released “Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0” (AV 4.0). Clearly, this will be a U.S. strength going forward, given the investments being made by U.S. tech firms. But there is plenty of potential interest in the UK as well, as this 2019 Society of Motor Manufacturers and Traders (SMMT) report notes. One of the report’s recommendations is international harmonization of regulations.

And as this AV Investor Tracker report establishes, concerns about data privacy are holding back the development of the sector. This Booz Allen Hamilton white paper delineates some of the issues at stake. Perhaps one way to help U.S. and UK carmakers would be to take what is relevant from the U.S. National Institute of Standards and Technology (NIST) Privacy Framework in creating “privacy by design” standards for AV manufacturers in both countries.

On the UK side there has been plenty of preparatory thinking about the privacy issues surrounding AVs, particularly what to do about location data. See this piece, for instance, entitled “Where your data is being driven.” The Centre for Connected & Autonomous Vehicles has done innovative work in this space. Perhaps U.S. and UK negotiators could agree on how privacy can be addressed through mutually agreed privacy-by-design standards for car manufacturers and for the apps that will add increasing value in automobiles.
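
One concrete privacy-by-design technique for location data is coarsening coordinates before they ever leave the vehicle. The sketch below is a deliberately simple illustration; the grid resolution is an assumption, and a real standard would also address retention, consent, and onward sharing:

```python
def coarsen_location(lat: float, lon: float, decimals: int = 2) -> tuple:
    """Round GPS coordinates so only an approximate area is reported.
    Two decimal places is roughly a 1 km grid at mid-latitudes."""
    return (round(lat, decimals), round(lon, decimals))

# Telemetry leaving the vehicle carries the coarse location only;
# precise traces stay on-board unless the driver opts in.
precise = (51.507351, -0.127758)   # central London
print(coarsen_location(*precise))  # -> (51.51, -0.13)
```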

On February 14, 2019, the United States and the United Kingdom signed a Mutual Recognition Agreement (MRA) with respect to standards. One of the agreement’s stated purposes is to promote trade between the two countries. At this time, it focuses on mutual recognition with respect to telecoms equipment, electromagnetic compatibility, and pharmaceutical good manufacturing practices. Perhaps there is scope to expand it to AVs and other standards important to innovative digital industries?

Making the Most of AI to Make Drug Discovery Cheaper and Quicker

There is a lot of excitement about the potential for AI to help with drug discovery, but there is arguably a need for standards to realize the technology’s potential. AI startup entrepreneur Charles K. Fisher, Ph.D., has actually asked the FDA to develop such standards. Why not have the FDA and NIST work with their UK equivalents to do precisely that?

The Confidentiality Coalition (composed of a range of healthcare industry players, including pharmaceutical companies) submitted a January 14, 2019 letter to NIST requesting that it work on a privacy framework that protects privacy while still allowing healthcare data to flow to where it is needed. One of the Coalition’s requests is that the Privacy Framework be consistent with HIPAA and other existing privacy frameworks.

The NIST Privacy Framework does not explicitly establish a system for complying with specific laws. And, for instance, with respect to international transfers of clinical trial data, there are some differences between HIPAA and the GDPR, as this article notes. Although NIST is appropriately careful to note that its cybersecurity and privacy frameworks are not “checklists,” it might be helpful, especially given the 2019 MRA, to select some sectors, such as healthcare, where additional guidance would be useful.
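
As a small illustration of where sector-specific guidance could bite, the sketch below strips a few of the direct identifiers that HIPAA’s Safe Harbor de-identification method covers (the full method spans 18 identifier categories, and the GDPR imposes its own, different requirements). The record layout and field names are hypothetical:

```python
# A few of the identifier categories HIPAA's Safe Harbor method removes;
# this is illustrative only and far from the complete list.
DIRECT_IDENTIFIERS = {"name", "phone", "email", "medical_record_number"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize dates to the year."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "visit_date" in clean:
        clean["visit_year"] = clean.pop("visit_date")[:4]  # keep "YYYY" only
    return clean

trial_record = {
    "name": "J. Smith",
    "medical_record_number": "MRN-00123",
    "visit_date": "2019-06-14",
    "dosage_mg": 50,
    "outcome": "improved",
}
print(deidentify(trial_record))
```

Agreed guidance on questions like these, i.e., which de-identification standard regulators on both sides will accept for transatlantic clinical trial data, is exactly the kind of sector-specific deliverable the 2019 MRA could support.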

After all, in 2018 the United States imported about $5 billion in pharmaceuticals from the United Kingdom, and in 2016 it exported about $2.5 billion in medical and pharmaceutical products to the UK. Despite these seemingly impressive numbers, it is hard to think of a sector more in need of a revolution in its innovation model. And beyond the economics, AI-enhanced drug discovery clearly has the potential to help people in the way that matters most: improving health outcomes through faster development of new drugs.

In this context, given the politics surrounding healthcare, it is worth underscoring that this technical standards cooperation is about ensuring high quality as well as efficiency; it has nothing to do with the healthcare delivery models that the United States and the United Kingdom choose. The UK can keep the NHS, and the U.S. can keep its largely private insurance-based system.

Conclusion

The U.S. is putting its money where its mouth is: federal funding is being prioritized for AI R&D. The UK is also a strong AI adopter and leader. And the two countries are partners that share similar values. Let’s make the most of these strengths and develop a trade deal that promotes AI-driven innovation.

 

To view the full blog, click here.

