I’d like a good, readable resource on how Artificial Intelligences might be treated by the law.

I don’t need one, alas – but it’s an interesting topic. I mean, surely a lawyer or three has sat down and thought out the legal implications of the existence of non-human sapient beings, and how they would interact with existing case law. The resource doesn’t have to be exhaustive; I’d be happy with an overview that wasn’t dull reading.

…I have odd hobbies.

16 thoughts on “I’d like a good, readable resource on how Artificial Intelligences might be treated by the law.”

  1. We already have AIs. It’s just that they can’t currently learn/think beyond their current programming nor can they refuse orders/new programming. I think this is a good thing, because no good can come from a completely independent AI.

    1. So an android like “Data” from Star Trek TNG wouldn’t be a good thing?

      Granted, I would be concerned about the programming of AIs, but if I came across a sentient AI, as long as that AI didn’t have a superiority complex or some other major personality issue, I wouldn’t have a problem with it.

      Though my perspective is somewhat atypical, since I often have problems understanding other people to some extent.

      1. I wouldn’t call Data a true AI. He could never develop his own emotions, and he always had trouble with contractions. It took additional programming for him to have emotions. The “Measure of a Man” ruling only said he wasn’t Starfleet property and had the right to choose; it didn’t establish sentience. Data may be intelligent and self-aware, but he does not have consciousness of, or access to, his own programming.

        The only Soong-type android to grow beyond her programming was Lal, and her hardware broke because her neural net couldn’t handle it.

        1. I disagree; there are people who have problems using contractions in real life, and that doesn’t mean they don’t have consciousness.

          Btw, there were some emotions that Data did demonstrate; curiosity is an emotion. He liked Sherlock Holmes novels; where is the logic in that? Data showed numerous quirks that seem to indicate he had a consciousness.

          Yes, he was trying to be more human (that was part of his programming), but that wouldn’t explain his development of personal preferences. Nor would it explain his going against orders in “Redemption Part II.”

          1. Quirks are not proof. I have never heard curiosity described as an emotion before. Curiosity denotes intelligence, and I am not saying Data is not intelligent. “Redemption II” also notes he has been in Starfleet 26 years, yet he still hasn’t been able to break his core programming and rewrite it. Even the Nexus-6s in Blade Runner needed only 4 years to start developing emotions.

            And I am going to head you off on the question of Data and art: in “Inheritance,” Dr. Juliana Tainer revealed that Dr. Soong programmed that into him. So it was a program, one Data was unaware of, in fact. His self-awareness does have its limits.

            Data is a walking, talking computer. He is a step up from our current AIs because he has basic self-awareness and is aware of others. However, he is unable to break the mold of his own core programming on his own, and thus lacks true sentience.

            Lal, on the other hand, was able to break the mold of her programming on her own. It is a shame that it happened only as she was breaking down.

  2. I’m sure Glenn Reynolds would have something interesting to say, if you asked nicely…

  3. Just re-read Asimov; it would be more entertaining than an academic paper, and he would probably be the inspiration for most of the early case law anyway – always better to go to the source.

  4. Well, there are several books on this issue; there are also Star Trek episodes that deal with it and the ethics involved.

    Star Trek TNG (Season 2): “The Measure of a Man”

    That’s the title, if I remember it correctly; I haven’t seen that episode in several years.

    1. Careful citing Star Trek for anything about ethics — it’s a totalitarian military dictatorship that has “eliminated want” (among the military officers), is nearly always at war with SOMEONE, has apparently committed some sort of mass extinction among at least one enemy (Klingons), and has high-ranking officers that routinely disobey orders, commit mutiny, hijack ships, and are let off with only minor punishments.

      1. I’m just saying that episode dealt with that particular issue.

        Star Trek is an interesting dream, and tbh in some ways it’s a lot better than other sci-fi stories where we’re dealing with a post-apocalyptic future. I kinda like the idea of actually making it into space instead of us fighting each other all the time.

  5. The classical definition of sentience is A) can it talk, i.e., can it pass along knowledge to future generations, and B) can it build a fire, i.e., can it alter its environment. Note that a number of species on this planet can do these things to a greater or lesser extent and yet are not considered sentient. Ants and termites, for instance.

  6. I would say erase the program… but remember, unlike in Terminator 2, to go after the offsite backups.

  7. The problem with Data is whether he is capable of building a replacement for himself. In the end, everything dies or breaks; is he capable of propagating his “species”?

  8. SFnal solutions:
     
    In H. Beam Piper’s “Little Fuzzy” novels, an alien species was considered a person if it was able to deliberately lie.
     
    In Alexis A. Gilliland’s “Rosinante” novels, AIs were treated by the law as corporations, with all the rights and responsibilities thereof.
     
    In the (nearly always) excellent webcomic “Questionable Content” (which is set in a universe similar to ours, but which has a vigorous space program and in which the “trick” to AI was discovered in the mid-1990s) AIs were recognized as people after a moving speech to the United Nations by Vernor Vinge.

  9. If you want to read up on the legal implications of AI, pick up one of the (many) very comprehensive and readable Contracts hornbooks, and one of the less confounding Torts hornbooks, and read both from stem to stern.

    In short, though, it’s a thing. The Law treats it as a thing. At some point, some person launches this thing on some trajectory, and the Law will be looking to this event when it has questions.

  10. In the book “The Probability Broach,” AI was viewed as a possible development. And if it had happened, society would have just shrugged and accepted them.

Comments are closed.