Thomson Reuters

Ethical considerations around AI in legal technology—TR Takeover of Legal Geek event

As the legal industry—like many similarly situated industries across world markets—increasingly embraces artificial intelligence (AI) to jump-start the automation, efficiency, and interconnectivity of its operations, it may be wise to pause before throwing that switch.

Even though full adoption of AI is still in its infancy in many areas of legal practice, stories of ethical problems with its use have already bubbled to the surface. These problems—including embedded biases in AI-driven algorithms, questions over security and privacy, and uncertainty over the role of human judgment—have made the full deployment of AI and its far-reaching consequences an area of concern. The issue has even created a new field of ‘robo-ethics’ and has piqued the interest of the United Nations and the World Economic Forum.

Mira Lane, Partner Director of Ethics & Society for Microsoft, recently discussed how important it is to consider the ethical aspects of expanding AI. “We don’t always see what it’s doing to us”, Lane explains. “AI tech can be a power multiplier, and it can help people scale very quickly.” However, Lane adds, algorithms trained on large amounts of data can also absorb the biases within that data, which are then reflected in the resulting models.

“Who will be impacted? What are the unintended consequences?”, Lane asks. “Ultimately, it means thinking about responsibility and accountability.”

To further this critical discussion, Thomson Reuters is presenting a session on ethics in AI as part of the TR Takeover of Legal Geek on 10 March, a half-day event that will feature the latest insights on the future of the legal profession and the impact of the newest legal technology.

You can register here to listen on demand to the TR Takeover of Legal Geek (originally aired 10 March 2021).

The session will highlight why it’s important to care about AI ethics, noting that collected data always reflects the social, historical, and political conditions in which it was created. “Artificial intelligence systems ‘learn’ based on the data they are given”, says Milda Norkute, Senior Designer at Thomson Reuters Labs, a team focused on AI innovation that will be presenting tangible examples of how it’s applying these principles and processes in practice.

These pre-existing conditions in which the data is collected, along with many other factors, can lead to biased, inaccurate, and unfair outcomes, Norkute explains, adding that this problem only grows as artificial intelligence and related technologies are used to make decisions and predictions in such high-stakes domains as criminal justice, law enforcement, housing, hiring, and education. “These biased outcomes have the potential to impact basic rights and liberties in profound ways”, Norkute says.

Nadja Herger, a Data Scientist at Thomson Reuters Labs, will walk attendees through how the idea of ethics in AI is considered throughout the design, development and deployment process, showing step-by-step how that high-level process unfolds.

“For AI ethics to appropriately be taken into account, it is essential to reflect on its implications at every step of the lifecycle”, Herger says, adding that means including questions such as: What is the impact of an imperfect AI system? Is there bias in our training data? How are users expected to interact with the AI system? How can we show how the AI system came to a certain decision to strengthen a user’s trust?
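As one concrete illustration of the “Is there bias in our training data?” question, a common first screening step is to compare favourable-outcome rates across groups represented in the data. The sketch below is purely illustrative (the toy data, group labels, and function name are assumptions, not a Thomson Reuters tool); a large gap between groups does not prove unfairness, but it flags a skew worth investigating before training.

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """Return the share of favourable outcomes for each group label."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for group, outcome in records:
        counts[group][0] += int(outcome)
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

# Toy data: (group label, favourable outcome?) pairs.
training_data = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

rates = positive_rate_by_group(training_data)
# Demographic-parity gap: here 0.75 - 0.25 = 0.5, a red flag to review.
gap = max(rates.values()) - min(rates.values())
```

Checks like this are deliberately simple; in practice they would be one of many questions asked at each step of the lifecycle Herger describes.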

“It is essential for corporations to take a proactive approach with these issues, to ensure sustainable, ethical, and responsible use of AI”, Herger says.

Eric Wood, a partner at the law firm Chapman and Cutler, will join Norkute and Herger to discuss the specific impact of AI on companies and law firms. This session will examine topics such as creating AI guidelines for your company, how ethical considerations around AI can arise at work, and whether stricter regulation of AI is needed.

Overall, the session will address the most crucial issues organisations face when dealing with AI. How do you ensure that AI is used ethically, rather than accelerating the biases and problems already inherent in society at large?

Dr. Paola Cecchi-Dimeglio, a behavioural scientist and senior research fellow for Harvard Law School’s Center on the Legal Profession and the Harvard Kennedy School, noted previously that it’s very important for legal organisations, and companies in general, to determine why they are using AI in the first place. “You have to remember that with many legal organisations, the data they are looking at is either what is publicly available or data they have gathered from working with their clients. And when artificial intelligence starts working with this data, it can be a very positive thing for a law firm”, Cecchi-Dimeglio says, noting that this process allows firms to make better decisions about jurisdictions, judges, and client matters in comparable situations.

“But problems arise, especially problems with biases, when the organisation isn’t careful about from where it’s taking its data or about what portion of data it’s using and not using,” Cecchi-Dimeglio adds. “Because if you start out with a biased history, you’re going to have biased results.”


