Latest AI News

The use of AI in judicial decisions will bring opportunities and challenges: CJI D.Y. Chandrachud

Chief Justice D.Y. Chandrachud discussed AI in law at the India-Singapore Judicial Conference, emphasizing its growing importance in legal practice and judicial decision-making. While advocating for embracing AI in technology and law, he highlighted the need for caution with ‘high-risk’ AI tools, which could introduce ambiguity and bias into judicial decisions.

The CJI also addressed the challenges AI poses for decision-making, reflecting on how its integration into law raises both ethical concerns and practical difficulties. “AI in legal procedures brings complicated questions. The use of AI in judicial decisions presents both opportunities and challenges that require nuanced deliberation,” he stated.
He stressed that AI’s role now extends well beyond efficiency: it has become part of legal research and judicial deliberation, with many firms already using AI for research without running into scalability issues. Globally, the adoption of AI in judicial decisions is proceeding rapidly. In Colombia in 2023, a judge used ChatGPT to help resolve legal questions in a case concerning insurance coverage of treatment for an autistic child, highlighting AI’s role in intricate legal cases.

“The court used ChatGPT to seek clarity on whether an autistic minor should be exempted from paying fees for his treatment. Based on this interaction, the judge affirmed that under Colombian law, an autistic child is indeed exempt from such fees.”


The Colombian judge also clarified that AI does not in any way undermine the autonomy and reasoning of judges. The CJI recalled a similar instance in Indian courts, where the Punjab and Haryana High Court consulted ChatGPT to gain a global perspective on bail jurisprudence.

The High Court did not rely on ChatGPT’s output in assessing bail eligibility itself; the tool was used only to survey bail jurisprudence worldwide, which was especially relevant in cases involving cruelty.

The COVID-19 pandemic accelerated courts’ reliance on technology. The CJI cited India’s adoption of case management systems and the use of AI in judicial decisions, comparing it to the UK’s ‘Digital Case System.’
The CJI also acknowledged AI’s creative use in the justice system’s administrative wing. The Supreme Court has introduced live transcription services, along with the ‘Supreme Court Legal Translation Software,’ which makes legal proceedings easier to follow in Hindi and other regional languages.

Highlighting the flip side, the CJI remarked that there are also many pitfalls to guard against. He referenced a case in the United States in which a lawyer used AI-generated material, including fabricated case law, in an attempt to deceive the court. Without robust auditing mechanisms, he cautioned, AI may exhibit “confirmation bias,” generating false narratives and inaccurate responses that can distort judicial decisions.

When grappling with AI, bias is a significant issue. The CJI explained how AI systems might unintentionally display ‘implicit bias,’ as seen in D.H. and Others v. the Czech Republic, in which the European Court of Human Rights addressed the overrepresentation of Roma children in special schools: seemingly neutral assessments had failed to account for cultural and linguistic differences.

The CJI further analyzed how such bias can arise: first, when a system learns from data containing errors or biases, and second, when it makes decisions through covert methods, producing reasoning that is opaque even to its human developers. Such AI processes are termed ‘black-box algorithms.’

“In the realm of AI, implicit bias can manifest in two crucial stages. Firstly, during the training phase, where incomplete or erroneous data can lead to biased outcomes.”

Secondly, bias can arise during data processing, particularly within the opaque ‘black box’ of algorithms. The term describes systems that conceal their internal workings, making it difficult to understand how decisions or outcomes are reached.

Implicit bias also surfaces in automated recruitment systems, where even the developers may not know why the algorithm favors certain candidates over others.

“Such opacity raises concerns about accountability and the risk of unfair outcomes.”

In this context, the CJI referred to the European Union’s proposal to regulate ‘high-risk’ AI in judicial settings, precisely because of this ‘black-box’ nature.

As an example of such high-risk AI, the CJI pointed to facial recognition technology (FRT), a tool that automatically identifies or verifies a person based on their facial features. Because of its surveillance capabilities, widespread accessibility, and potential for misuse, FRT, especially in its most prevalent form, biometric facial recognition, raises significant privacy concerns.

Recently, cases such as Glukhov v. Russia have highlighted these privacy violations: FRT was used to identify an individual at a peaceful protest, leading to a conviction. The court held that such use of FRT is highly dangerous and that strict regulations must be in place to prevent its misuse.

Turning to solutions, the CJI emphasized capacity building and training as the foundation for the responsible and effective use of AI. Legal professionals, he said, must develop the skills needed to address the challenges AI poses.

“By putting resources into education and training initiatives, we can prepare professionals with the knowledge and abilities needed to tackle the complexities of AI, recognize biases, and maintain ethical standards when using AI systems,” he noted. “In addition, capacity building can promote a culture of responsible innovation, where stakeholders prioritize the ethical implications of AI development and deployment.”

The CJI commended the Council of Europe’s efforts in drafting a framework convention on AI governance, which demonstrates a commitment to developing global standards for AI governance, human rights, democracy, and the rule of law.

Looking at the bright side

Looking to the future of AI and its benefits, CJI D.Y. Chandrachud also voiced concerns about social division and discrimination. Quoting American abolitionist Frederick Douglass, he stressed that the benefits of AI, particularly its use in judicial decisions, must be made available to all.

Conclusion:
Humanity has recognized AI’s potential to drive progress. By embracing global collaboration, we can establish a framework for the responsible and ethical use of AI technologies, paving the way for a future where technology empowers every individual, fostering inclusivity, innovation, and progress, and where AI benefits humanity as a whole.
