Welcome to Natural and Artificial Intelligence in Law
Natural and Artificial Intelligence in Law is a professional resource at the intersection of AI law, human rights, equality, housing and civil justice. Curated by barrister Matthew Lee and now enjoying an international readership, the blog offers expert commentary, practical guidance, and live trackers of developments in AI law. Read all about the project here.

Over the past few weeks, I have been particularly struck by the AI agents recently deployed by OpenAI. It’s a concerning development, and one that many of us, myself included, have not yet fully grasped.
Many of us are now accustomed to working with ChatGPT and other large language models (LLMs) that assist with answering questions, suggesting products, aiding in analysis, and conducting research. However, one crucial aspect has remained unchanged: the user has always been responsible for taking the final step. The LLM provides information, but it is up to the user to decide how to use it.
That’s about to change significantly!
OpenAI has released “Operator”, a tool that allows users to instruct an AI agent to carry out tasks freely on the internet. This might sound convenient, but it raises critical legal questions. If AI starts taking real-world actions, who is responsible when something goes wrong? Is this simply an agency relationship, where the principal is liable for the agent’s actions, or does it require an entirely new legal framework?
Is it time to consider AI rights and obligations?
In this post, I will only briefly raise the issues, but I look forward to delving deeper into them in due course, especially once I have spent significant time using Operator and have a better understanding of its capabilities.
How does GPT-4o describe ‘Operator’?
I asked GPT-4o to briefly outline what Operator is:
Operator is a new AI tool currently being tested in the UK. Unlike other AI assistants that give advice, Operator can complete tasks for you. This includes:
- Ordering groceries online
- Booking a taxi
- Making restaurant reservations
Instead of merely suggesting options, Operator takes action.
Legal and Ethical Issues That May Arise
The introduction of Operator presents a range of potential legal and ethical challenges:
- If Operator books a service incorrectly, commits an error, or acts in a way that causes harm, who is legally responsible? Should the AI developer, the user, or a third party bear liability?
- When Operator enters into a contract (e.g., purchasing a product or making a booking), does it bind the user in the same way as if they had acted personally? Are current contract laws sufficient to cover AI agency, or do we need new legal definitions?
- AI-driven actions will need to comply with consumer protection laws, financial regulations, and data privacy frameworks. How will regulators ensure AI agents like Operator adhere to these laws?
- If AI is given more autonomy, should it also have obligations under the law? Could it eventually bear some legal responsibility for its decisions, or is it always an extension of the user’s will?
The Future of AI Autonomy in Law
As AI systems become increasingly capable of independent action, the legal system must evolve to address these challenges. We have long debated the rights and responsibilities of corporations, which are legally treated as persons in some respects. Could AI entities one day require a similar legal framework?
For now, the questions remain open-ended, but one thing is clear: the line between human action and AI action is becoming increasingly blurred. If AI can make legally binding decisions on behalf of users, it may be time to reconsider whether AI should be recognised as more than just a tool and whether it should bear certain rights and responsibilities of its own.
This is just the beginning of the conversation, and I look forward to examining these issues in greater depth as AI autonomy continues to advance.
