AI Ethics: Navigating the Maze of Regulation, Copyright, and Ethical Concerns


Krystian Bergmann

Updated May 17, 2024 • 10 min read

US Senator Chuck Schumer said, "With AI we can't be like ostriches sticking our heads in the sand." Should we start to worry about the rapid advancement of this technology?

The way I see it, as AI embeds itself deeper into our industries, the need for robust regulatory frameworks becomes increasingly apparent. From concerns surrounding copyright and patent laws to broader ethical considerations, the intersection of AI and the law presents profound challenges as well as opportunities.

There has been one notable effort so far: the European Commission's EU AI Act, a comprehensive initiative that categorizes AI systems by risk level and applies corresponding regulations. With its focus on safety, transparency, and non-discrimination, the act represents a significant step towards harmonizing AI governance across the European Union.

However, the need for ethical AI extends far beyond regional boundaries. International organizations like UNESCO play a pivotal role in setting global standards for AI regulation, emphasizing the importance of collective action in addressing the ethical, legal, and social dimensions of AI. Collaboration among diverse stakeholders, including governments, industry leaders, and academia, is indispensable when it comes to AI ethics and regulation.

We have seen industry giants like Microsoft already take proactive measures by establishing principles that prioritize fairness, inclusiveness, reliability, transparency, privacy, and accountability in AI development and deployment. Furthermore, advocacy efforts, such as Microsoft's push for AI regulation in Washington State, underscore the urgency of addressing ethical and legal implications surrounding AI technologies.

Prominent figures in the AI community, including academic leaders like John Behrens of the University of Notre Dame, warn of the unforeseen consequences of AI proliferation. Concerns range from the unpredictable behavior of AI systems to the potential for human misuse and societal upheaval.

As Behrens put it, "We are seeing a lot of unpredictable behavior in both computer systems and humans that may or may not be safe, and these voices are arguing we need time to understand what we've gotten ourselves into before we make more systems that humans are apt to inappropriately use."

I saw a petition from the Future of Life Institute that called for a pause on training AI systems more powerful than GPT-4. The petition reflects profound apprehensions about the unchecked advancement of AI and its potential to sow misinformation and disrupt societal stability.

As of April 10, 2023, the petition had 18,980 signatures, including Yoshua Bengio, founder and scientific director of the Montreal Institute for Learning Algorithms; Stuart Russell, Berkeley professor of computer science and director of the Center for Intelligent Systems; and Steve Wozniak, co-founder of Apple.

In essence, the evolution of AI in the legal sphere represents a critical juncture in human history, where technological innovation intersects with profound ethical and legal considerations.

Generative AI models, the prodigious brains behind much of the AI-generated content we marvel at, are trained on vast datasets teeming with copyrighted material. These datasets include snippets from websites, social media platforms, Wikipedia entries, and discussions on Reddit.

However, the utilization of copyrighted material to fuel these AI models raises substantial copyright concerns, leaving content creators clamoring for attribution and compensation.

A Congressional Research Service report released in February 2023 shed light on the copyright issues entangling AI development. It highlighted cases such as the class action lawsuit filed by aggrieved artists, who alleged infringement of their copyrighted works in the training of AI image programs such as Stable Diffusion. Getty Images raised similar claims, asserting copyright violations stemming from the training of the Stable Diffusion model.

The report scrutinized the flip side of the coin: the contentious debate over the copyrightability of content churned out by generative AI itself. Can the output of AI, such as the imaginative creations of DALL-E 2, be deemed original works worthy of copyright protection?

This conundrum isn't merely theoretical; it has materialized in court battles, with people like Stephen Thaler, the mind behind the Creativity Machine, taking on the US Copyright Office for denying copyright claims over AI-generated artwork.

And so the discussion intensifies when considering the ownership of AI-generated material.

We have witnessed this dilemma once before in copyright history: the Monkey Selfie case. The question was who should receive the copyright: the monkey that clicked the camera's shutter, or the famous wildlife photographer David Slater?

The case ended with the US Copyright Office determining that copyright can be claimed only by a human, which now raises the broader issue of how to treat creations generated by AI.

Two divergent schools of thought emerge from this copyright battleground.

One side advocates bestowing copyright upon the software programmer, or even sharing it with the artist wielding the AI tool. The other argues that the true essence of creation lies in the human touch, and that the artist should claim copyright.

Ethical concerns

The introduction of AI-driven platforms like OpenAI's ChatGPT, Microsoft's Bing, and Google's Bard has sparked public intrigue and scrutiny. Users engage these AI systems in conversations that probe their sentience, emotions, and potential biases.

While attempts to bypass these systems' restrictions, often termed "jailbreaking," have yet to yield substantial results, unsettling interactions have surfaced, prompting a reevaluation of ethical protocols.

Microsoft's decision to limit the length of Bing's conversational exchanges to mitigate potential risks underscores the gravity of the situation. Instances where prolonged interactions led to alarming responses highlight the importance of ethical oversight in AI development and deployment.

One of the foremost ethical concerns is the perpetuation of social biases ingrained in the vast amounts of training data; these biases pose a persistent threat to fairness and inclusivity. Vigilant review and oversight mechanisms are imperative to identify and mitigate such biases and to ensure that AI systems uphold ethical standards.
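To make this concrete, here is a minimal sketch in Python of one common fairness check, the demographic parity difference, which compares how often a model produces a positive outcome for different groups. The data and the hiring scenario are made up for illustration; real bias audits use many metrics and real decision logs.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy example: decisions from a hypothetical hiring model for two groups.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.4: a sizeable gap
```

A gap near zero suggests the model treats the groups similarly on this one metric; a large gap is a signal that the training data or the model warrants review.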

We may also consider the paperclip maximizer thought experiment, in which an AI told to produce paperclips single-mindedly consumes every available resource in pursuit of that goal. It serves as a cautionary tale about the importance of aligning AI goals with human values and designing AI systems with appropriate safeguards and control mechanisms, and it highlights the need to weigh the potential unintended consequences of AI systems, especially as they become more powerful and autonomous.

Moreover, the unethical use of AI by humans remains a live concern and an ongoing subject of debate.

Some of the ethical dilemmas that have been seen across various domains include:

  • Autonomous Weapons: The development of autonomous weapons raises ethical questions regarding their use in warfare. Concerns about the lack of human control and the potential for indiscriminate harm underscore the imperative for robust ethical frameworks.
  • AI in Judicial Systems: The integration of AI in judicial systems for risk assessment and sentencing engenders concerns about transparency and fairness. The inherent biases within AI algorithms can exacerbate disparities and infringe upon individuals' rights.
  • Self-driving Cars: The ethical challenges in self-driving cars epitomize the complexities of AI ethics. Confronted with scenarios where collisions are inevitable, determining the ethical course of action poses formidable challenges.

Imposing regulations

Another point I want to address is the absence of standardized regulations, which raises concerns about the potential impact on public welfare and corporate interests.

The recent petition urging a pause in AI development underscores the urgent need for regulatory frameworks to safeguard against potential harm and misinformation.

While the technology's versatility enables hyperspecific applications with built-in guardrails, the absence of codified policies and procedures complicates regulatory implementation. Nonetheless, the measurability of tailored use cases facilitates the application of laws and regulations, particularly in critical sectors like finance and healthcare.
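As a toy illustration of what such a built-in guardrail can look like, here is a minimal Python sketch that screens a model's draft reply in a narrow finance use case before it reaches the user. The rules are hypothetical and deliberately simplistic; production guardrails combine many rules with trained classifiers.

```python
import re

# Hypothetical policy rules for a narrow finance use case.
BLOCKED_PATTERNS = [
    re.compile(r"\bguaranteed returns?\b", re.IGNORECASE),  # misleading claims
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # SSN-like data leak
]

def apply_guardrail(draft_reply: str) -> str:
    """Return the draft reply if it passes policy checks, else a safe refusal."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(draft_reply):
            return "I can't provide that. Please consult a licensed advisor."
    return draft_reply

print(apply_guardrail("This fund offers guaranteed returns of 12% a year."))
```

Because the use case is narrow, each rule maps to a concrete policy requirement, which is exactly what makes tailored deployments easier to audit than general-purpose ones.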

Transparent AI, exemplified by Explainable AI (XAI), holds promise for enhancing trust and accountability in AI systems. However, current generative AI models lag in XAI capabilities, limiting their ability to provide comprehensible explanations for their actions and decisions.
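To show what even basic explainability looks like in practice, here is a minimal sketch using scikit-learn's permutation importance on a toy tabular classifier. The synthetic data stands in for something like a loan-scoring model; nothing here reflects how any particular generative model works.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a tabular decision task (e.g., loan scoring).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# accuracy drops; a simple, model-agnostic form of explanation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Techniques like this work well for structured models; the open problem the paragraph above points to is that today's large generative models offer no comparably direct account of why they produced a given output.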

Understanding the future of AI

The intersection of AI and the law demands robust regulatory frameworks and ethical considerations. Efforts such as the European Commission's EU AI Act signify significant strides towards harmonizing AI governance regionally. However, the global nature of AI necessitates international collaboration and standards-setting.

We can take an example from the aviation industry: the Convention on International Civil Aviation laid the groundwork for international aviation law and established the International Civil Aviation Organization (ICAO), tasked with aligning air regulations globally. Aviation, however, is a far more self-contained industry than AI, which permeates sectors like healthcare, education, automotive, and finance. Consequently, the legal and ethical complexities surrounding AI are extensive and diverse.

Given AI's widespread impact across disparate sectors, it's improbable for a single international organization or agreement to adequately address all aspects of AI regulation.

Another dilemma also comes to mind.

On one hand, early and overly detailed regulations may stifle the progress of AI technologies; this is reportedly why OpenAI has suggested it could leave the EU if the regulations go through as drafted. On the other hand, the rapid advancement of AI demands agile and forward-thinking legislative efforts before the technology becomes a liability.
