
Regulations for Responsible GenAI

This blog highlights the need for a proper framework, policies and guidelines for new-age AI applications.
 

Author

Vineet Sharma
Enterprise Architect

Introduction

Generative Artificial Intelligence (GAI) has been a hot topic among researchers for the last few years, and the recent advancements showcased by ChatGPT and DALL-E have brought global recognition to its capabilities. People from all walks of life were initially amazed by the proficiency displayed by these heavily trained large language models (LLMs) and foundation models. Media articles and CEOs alike talked extensively about the potential of these powerful GAI systems and how they would automate various tasks in our day-to-day lives.

But this euphoria soon faded into widespread anxiety, and people began to question whether they were ready for this revolution in AI. This blog discusses the need for a proper framework, policies and guidelines for new-age AI applications.

 


Generative AI (GAI) has proven to be a groundbreaking technology, with recent advancements showcased by ChatGPT and DALL-E.


 

Background

Recently, ChatGPT (Chat Generative Pre-trained Transformer), an LLM, was released after being trained on approximately 45 terabytes (TB) of public data spanning Wikipedia, research papers, literature, news, history, scientific data and many more sources. It can almost be compared to a child with billions of neurons who has not only read but memorized the entire 45 TB of data.

The initial objective of training the model, which has approximately 175 billion parameters, was to foster linguistic understanding and teach it to generate grammatically correct sentences. In the course of training, however, it also absorbed the non-linguistic information in the data: history, poems, authors, general knowledge, science and more. The trained model, equipped with knowledge of language as well as these other subjects, can not only answer questions but also compose the answers in its own grammatically correct sentences. Much like humans, it can retrieve information and articulate a reply in natural language. Foundation models like DALL-E perform similarly complex operations on images and text combined, generating an image from a textual description.
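
To ground this in something concrete, the sketch below shows how an application might pose a question to such a hosted model and print its generated answer. This is a minimal sketch, assuming the openai Python package with its v1.x client interface; the model name and prompt are illustrative placeholders, not specific recommendations.

```python
# Minimal sketch of querying a hosted LLM, assuming the openai
# Python package (v1.x interface); the model name and prompt are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any available chat model
    messages=[{
        "role": "user",
        "content": "Who wrote Hamlet? Answer in one grammatically correct sentence.",
    }],
)

# The model retrieves knowledge absorbed during training and phrases
# the answer in its own words.
print(response.choices[0].message.content)
```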

The 2017 introduction of the transformer architecture, whose attention mechanism lets an entire input sequence be processed in parallel rather than token by token, enabled much faster training of these deep learning models and thereby paved the way for models with billions of parameters. Parameters are the internal variables of a model that are adjusted during training to capture the many aspects of the training data.
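
To make the parallelism point concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the 2017 transformer architecture. The sequence length and embedding size are illustrative assumptions; in a real model, the Q, K and V matrices are produced by learned projections, which are among the parameters described above.

```python
# Scaled dot-product attention in plain NumPy. Every position of the
# sequence attends to every other position in a single matrix
# multiplication, which is what makes transformer training easy to
# parallelize compared with token-by-token recurrent models.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of value vectors

# Illustrative sizes: a sequence of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (4, 8)
```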

ChatGPT's underlying models (GPT-3.5, GPT-4) are not the only ones of such complexity and size. Below are a few other LLMs with their sizes and release dates:

Name                                               Release date    Number of parameters
Gopher                                             December 2021   280 billion
LaMDA (Language Model for Dialogue Applications)   January 2022    137 billion
GPT-NeoX                                           February 2022   20 billion
Chinchilla                                         March 2022      70 billion

(Source: Large language model - Wikipedia)

These models are also trained on large datasets, give results comparable to the commercially available LLM foundation models and are capable of complex tasks, including the following (a sketch of the first task appears after the list):

  1. Summarize articles
  2. Draft new articles and blogs
  3. Create new textual and multimedia content
  4. Suggest or create new code for programmers
  5. Answer questions from documents, reports and more
  6. Create test cases for testing activities
  7. Explain an article, code or a set of steps from a given input source
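
As a hedged illustration of the first task above, summarization can be requested with an ordinary prompt. The sketch below again assumes the openai v1.x Python client; the model name, instruction and article text are placeholders.

```python
# Sketch of article summarization via prompting, assuming the openai
# Python package (v1.x interface); model and text are placeholders.
from openai import OpenAI

client = OpenAI()

article = "...full text of the article to be summarized..."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any available chat model
    messages=[
        {"role": "system", "content": "Summarize the article in three bullet points."},
        {"role": "user", "content": article},
    ],
)
print(response.choices[0].message.content)
```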

The above offers a glimpse of how things will change over the next five years as these language and foundation models become more intelligent. Even though their intelligence is artificial, it puts them in a race with humans.

Impact on society and humanity

Until recently, AI was limited to specific tasks with limited real-life effectiveness and was never in a position to challenge humans. This is evident from autonomous driving: approximately $160 billion was invested in the sector over the last decade, yet progress has fallen far short of expectations.

AI/ML was confined to image recognition, predictive maintenance, gaming, recommendation systems and other task-specific applications, and none of these systems was general or intelligent enough to handle a multitude of tasks the way the new transformer-based models can.

Now, let’s see how these new-generation LLMs and foundation models force us to rethink the way we look at software and machines.

"Humanity is a virtue linked with basic ethics of altruism derived from the human condition. It also symbolizes human love and compassion towards each other." Humans and society have primarily been based on standards referred to as morals and ethics.

These new-generation AI models overlap with our space of intelligence, our work and our world, and hence need a policy covering moral and ethical frameworks.

Some of the most pressing issues raised by this intelligent software are:

  1. Misuse of their capabilities by humans or by themselves
  2. Accountability for their actions
  3. Data privacy
  4. Role of humans and machines in a common space
  5. Disruption of human jobs and roles
  6. Humans competing with machines in the field of arts and creativity

The above list is only indicative; a complete list is beyond the scope of this blog.

In the current human framework, misuse of power, position and data is controlled by laws that apply to all adults along broadly similar lines across the globe. A proper legal system is in place whereby the ‘person’ who made the decision is charged for the misuse, and misuse itself is clearly defined in the law books. But what about intelligent software? Should it be treated as a person as well?

Accountability: Accountability makes humans responsible for their actions, and intention is the key factor: whether the actor acted knowingly or unknowingly. These terms cannot be applied as-is to AI software; this is a new aspect that needs to be defined properly.

Data privacy: Privacy is of utmost importance in this digital world, and data protection laws have been framed. But these new-age AI models are trained on public data; training only on custom or closed data would yield a less intelligent model with limited capabilities. Can public data be used to train a model, and can the intelligence learned from that public data then be sold commercially? We need a clear policy on these issues.

The role of humans: The relationship between humans and machines has always been that of a master and an assistant: human beings own the tasks, and the machine merely assists with them. But when machines become more intelligent, they challenge the position of humans.

For centuries, humans have used their natural and acquired skills to make a living, and making a living through jobs is at the core of our social and economic setup. When software becomes intelligent enough (in fact, it already has) to replace a large number of job roles, humans and computers may have to compete for jobs. Inventions are meant to make human lives easier and safer, but AI software is displacing human jobs at a scale never seen before, with initial estimates ranging from 15% to 25% of roles. So the question arises whether we need a policy governing the potential economic disruption from these tools; this, too, must be incorporated into the policy for such software.

Art and creativity: These have been at the core of human civilization, and humans have never been challenged in this field. Now, however, AI-based software can create images, paintings, music, videos, lyrics, poems, blogs, essays, articles and many other forms of creative output, and there are many artistic areas in which AI will compete. Let’s hope human artists keep an edge, at least at the top end; the lower end of these arts is likely to be dominated by computers, given their rich data and computing power.

Conclusion

All of the above aspects of GAI's impact on human life, and many more, compel us to think about new policies to regulate these new-generation, super-powerful, intelligent systems. The policies, frameworks and guidelines to regulate such intelligent software must be collective initiatives involving governments and public, legal, corporate and political bodies. The need for such a framework is already being debated by CEOs, scientists and governments. The exercise will take years to complete and must be accepted by all stakeholders.

Corporate responsibility in the IT industry, and its leadership in this new-age initiative, will be a key milestone in defining the shape of AI's future for humankind. Organizations should educate AI developers about the ethical and moral issues related to their models, and should help define ethical guidelines and standards that provide principles and best practices for the responsible development and use of AI models.

References:

  1. Autonomous vehicle reality check after $160 billion spent, Automotive News (autonews.com)
  2. Humanity (virtue), Wikipedia: https://en.wikipedia.org/wiki/Humanity_(virtue)
  3. ResearchGate discussion: https://www.researchgate.net/post/Is-AI-artificial-intelligence-a-man-made-intelligence-that-is-full-similar-to-humans-or-only-enough-to-be-a-tool-for-humans
TAGS:
Artificial Intelligence