Can You Sue A Robot?

By ReKeithen Miller, May 28, 2019


Isaac Asimov’s first law of robotics, introduced in a 1942 short story, states that a robot may not injure a human being. But if a robot injures a human financially, could that human take it to court?

The question is no longer confined to science fiction. Samathur Li Kin-kan, a prominent Hong Kong investor, will test the legal limits of liability. He invested in a hedge fund run by a supercomputer, and the fund suffered substantial losses, including more than $20 million in a single day. According to Bloomberg, since Li cannot sue the artificial intelligence directly, he is trying the next best thing: suing the company that runs the fund. Li seems to harbor particular animus toward the fund’s founder, who personally convinced him to trust the AI-driven fund with his money. Raffaele Costa, an Italian hedge fund manager known to some of his peers as “Captain Magic,” sold Li on the merits of Tyndaris Investments’ AI-managed fund. The question remains: Is Costa’s firm liable for Li’s losses?

According to court filings, Costa showed Li simulations projecting double-digit returns for the AI-run fund. The two men now dispute the thoroughness of this backtesting. Li’s suit against Tyndaris claims that Costa overplayed the AI’s abilities. Tyndaris has countered that no one guaranteed Li that he would make money. In an industry that routinely reminds investors that “past performance is no guarantee of future results,” I would be shocked if anyone had made such a promise, regardless of the technology involved. Tyndaris is also suing Li for unpaid investment management fees.

Humans have a long history of distrusting automation. When automatic elevators first appeared around 1900, riders were terrified; the very idea of stepping into an elevator without a human operator seemed unthinkable. It took until the mid-1940s, when New York elevator operators went on strike, for widespread automation to take hold, and even then building owners had to mount a campaign to convince people to use the new technology. Today, while elevator accidents do happen, they are exceedingly rare. Most of us don’t think twice about riding elevators without operators, sometimes many times per day.

Some observers have drawn a parallel between early operatorless elevators and today’s driverless cars. If nothing else, a similar public wariness lingers. Self-driving cars have been on the way for years, but the death of a pedestrian in Arizona last year fed existing fears about their dangers. Prosecutors cleared Uber of criminal liability but asked local police to further investigate the car’s backup driver. People want to know who will bear legal responsibility if an AI-driven car causes an accident. The answer is not yet apparent.

Ironically, when it comes to investing their money, people tend to trust artificial intelligence too much. “People tend to assume that algorithms are faster and better decision-makers than human traders,” Mark Lemley, a law professor at Stanford University who directs the university’s Law, Science and Technology program, told Bloomberg. “That may often be true, but when it’s not, or when they quickly go astray, investors want someone to blame.” The truth is that technology is only as good as the human beings who design and build it. The financial industry has steadily incorporated artificial intelligence in various ways, and humans build all of these tools, directly or indirectly. Computers designed to identify and execute trades are already popular.

A system like the one Tyndaris marketed is rarer; it automatically learns and improves from its own experience. Machine learning, in which computers train themselves rather than simply following detailed programs, offers many new opportunities, including financial applications. However, it is not a system free from human biases or failings. Algorithms build on training and test data selected by humans. The “black box” nature of this type of AI means it can be hard for observers to determine the rationale when the machine draws inaccurate conclusions. This can lead to problems in everything from gender bias in language translation to racial bias in deciding who is granted parole. Google recently announced that it is working on a technology that will make it easier to identify and combat biases arising in self-teaching AI. However, it is obvious by now that machine learning alone does not make AI infallible.
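To see how much the humans behind the data matter, consider a deliberately tiny sketch in Python. The corpus, words and counts are all made up for illustration: a “translator” picks a pronoun for a gender-neutral source word simply by majority vote over its training examples. The algorithm is neutral; the hand-picked data is not.

    from collections import Counter

    # Hypothetical, human-selected training corpus: pairs of (word, pronoun).
    training_data = [
        ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
        ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ]

    def predict_pronoun(word: str) -> str:
        """Pick the pronoun most often paired with the word in training."""
        counts = Counter(pronoun for w, pronoun in training_data if w == word)
        return counts.most_common(1)[0][0]

    print(predict_pronoun("doctor"))  # "he"  -- the skew comes from the data,
    print(predict_pronoun("nurse"))   # "she" -- not from the algorithm

Swap in a different corpus and the same code gives different answers, which is the whole point.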

Backtesting, which uses historical data to project how a particular investing strategy might have performed, was also a central point of contention in Li’s lawsuit. While we don’t know what sort of due diligence Li performed, it is important for any potential investor to ask what assumptions served as the basis for a given model. Knowing what historical data the tester chose, and why, can offer key insight into how much weight to give the results. This is all the more crucial when the projected results seem too good to be true. As with machine learning, inputs matter.
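As a rough illustration of why the chosen history matters, here is a minimal backtest sketch in Python. The prices and the moving-average strategy are hypothetical stand-ins, not a reconstruction of the Tyndaris model; the point is that the same strategy looks brilliant on one history and dismal on another.

    def backtest(prices, window=5, starting_cash=10_000.0):
        """Replay a simple moving-average crossover strategy over past prices."""
        cash, shares = starting_cash, 0.0
        for i in range(window, len(prices)):
            moving_avg = sum(prices[i - window:i]) / window
            if prices[i] > moving_avg and cash > 0:      # buy signal
                shares, cash = cash / prices[i], 0.0
            elif prices[i] < moving_avg and shares > 0:  # sell signal
                cash, shares = shares * prices[i], 0.0
        return cash + shares * prices[-1]                # ending portfolio value

    # Two hypothetical price histories: a calm uptrend and a choppy market.
    calm = [100 + i for i in range(30)]
    choppy = [100 + (7 if i % 2 else -7) + 0.1 * i for i in range(30)]

    print(backtest(calm))    # the strategy looks like a winner
    print(backtest(choppy))  # the same strategy loses money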

Not everyone has the opportunity to invest in a fully AI-run hedge fund, for better or worse. Yet artificial intelligence is making a major mark on finance through automated financial advisers, often called robo-advisers. These services build and manage individual investment portfolios with little to no direct human input, although many firms offer supplemental human advisers for customers who want to ask particular questions. Automated services have gained popularity because of their low costs and low barriers to entry.

While these tools can benefit certain investors, they too are vulnerable to errors. For instance, automated advisers programmed to engage in tax-loss harvesting can run afoul of wash sale rules, under which an investor who sells a security at a loss and buys the same or a substantially identical investment within 30 days of the sale cannot claim the loss for tax purposes. In a period of volatility, these restrictions can create problems. For example, TD Ameritrade’s socially responsible investing (SRI) portfolio harvested losses three times in the fourth quarter of 2018. Because of wash sale rules, 35% of the portfolio was allocated to cash between Dec. 24 and the end of the year, which meant returns were lower than they otherwise could have been.
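For illustration only (and not as tax advice), here is how simple the core 30-day check is. The dates are hypothetical, and real implementations also have to handle questions such as what counts as “substantially identical” and how to treat partial lots.

    from datetime import date

    def loss_is_disallowed(sale_date: date, repurchase_date: date) -> bool:
        """Wash sale check: a loss is disallowed if the same or a substantially
        identical security is bought within 30 days of the loss sale."""
        return abs((repurchase_date - sale_date).days) <= 30

    # A robo-adviser harvests a loss on Dec. 24, then wants to buy back in.
    sale = date(2018, 12, 24)
    print(loss_is_disallowed(sale, date(2019, 1, 10)))  # True: inside the window
    print(loss_is_disallowed(sale, date(2019, 1, 24)))  # False: 31 days later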

The Securities and Exchange Commission has subjected automated advisers to additional scrutiny lately. That scrutiny has largely focused on marketing and social media. Regulators want to make sure that advisers, human or AI, meet strict standards of documentation and transparency. The SEC charged Wealthfront Advisers with falsely stating that it monitored all client accounts to avoid transactions that might trigger a wash sale. The firm was censured and agreed to pay a fine.

In finance – as in transportation, health care and many other fields – we want to know who is responsible when artificial intelligence accidentally harms someone. The answer isn’t yet clear. Li’s legal battle with Tyndaris and Costa is the first known instance of litigation over investment losses triggered by autonomous trading. Given the increasing convergence of technology and finance, it certainly will not be the last.
