
Will Artificial Intelligence Take Over?

00:00:00:00 – 00:00:19:13
Hi, I’m Laney Law and I’m attorney Andrew Myers. Even the top artificial intelligence experts are scared of what it might do. Can laws keep up?

Chatbots really are frustrating, aren’t they? And they’re not only frustrating, they’re dangerous. But are they really that smart, when the application itself is pretty dumb? We’ll take a look at that in a minute. But first: artificial intelligence has been around for a long time, but it seems that only very recently has it reached a real tipping point.

00:04:16:27 – 00:04:41:02

A new version of ChatGPT, which, although annoying, can also be dangerous, came out earlier this year (2023). It is a program built on far more data than ever before. It’s trained as what’s called a large language model, along with all of the other data that’s been plugged into the new GPT-4.

00:04:41:04 – 00:05:09:04

It can handle language and it can also handle images. The concern is that it has reached a new tipping point. GPT-4 and other artificial intelligence programs have now passed the bar examination. They have passed medical examinations. They have passed other standardized tests in the 94th to 95th percentile. They’ve done a lot of other things.

00:05:09:07 – 00:05:39:03

One program has actually produced an entire website just from a sketch. Somebody sketched out a website, and the AI program built the whole thing. There are a lot of problems, though. One is that an AI was programmed to help with medical records, and that was fine as far as it went. But then the AI program actually put spurious, extraneous information into the person’s medical records.

00:05:39:06 – 00:06:06:28

Another thing that happened: sometimes the output includes information that was never in the training data. In other words, it makes things up, what researchers call hallucination. And one of the problems is that even the creators of some of these AI programs have now said, wait a minute, this has gone too far. Some have gone as far as to call it an existential threat.

00:06:07:00 – 00:06:35:09

A number of technologists, including Steve Wozniak and Elon Musk, actually signed an open letter calling for a six-month pause on training the most powerful artificial intelligence systems. They said that advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Now, that’s a little bit extreme, but that’s what they’re saying.

00:06:35:10 – 00:06:52:27

I ran into some real legal problems that could come up from artificial intelligence. One involves anybody who has a small business. What kind of business might that be? Honestly, when you consider it, there are so many different kinds. I have a friend who does astrology readings, for example, things like that.

00:06:53:03 – 00:07:13:20

I have friends who make crafts. I have friends who do event coordination, who do music, who are music producers. So with this economy, a lot of people are saying, well, artificial intelligence is just going to be a huge help to these business people. Let’s take a wedding planner. All right, a wedding planner.

00:07:13:20 – 00:07:32:24

It’s a somewhat complex business. You have the DJ, you have the food, the caterer, you have the musicians. Oh, don’t forget the invitations. Oh, and the bride and the groom. And so people are saying, well, a business like that, it’s just so complicated. You know, artificial intelligence will help to bring it all together and do everything for you.

00:07:32:24 – 00:07:56:26

So the wedding planner hires artificial intelligence and takes his or her eye off the ball. And guess what? The A.I. doesn’t do the job. The wedding day comes. There’s no food, there’s no music, there’s no photographer. And, oh, the bride and groom were told a different day. So what people are asking is: now what? Who is legally responsible?

00:07:56:29 – 00:08:21:22

It’s hard in that situation, because the way I feel, and this may evolve over time, is that a lot of people are using AI as a tool. And like you mentioned, sometimes things can go wrong. Sometimes the AI just inserts things, not problems exactly, but things that weren’t necessarily part of the original plan.

00:08:21:25 – 00:08:56:16

I think there is probably a level of joint liability, especially considering what went wrong. At the end of the day, you’re a small business; you should be reviewing and processing everything that’s coming out of this. Turning a blind eye and letting the machine do everything at that point raises the question of why not just go fully with the machine, if the human isn’t even reviewing it. But at the same time, depending on how bad the situation is, a lot of the time we can go on these websites and end up with some random developer.

00:08:56:21 – 00:09:24:09

We don’t know who the developer is, whether they’re doing things ethically, how they’re sourcing the information, and everything else that goes on behind that, even things like data collection. There are so many different aspects, and it’s so easy to just go online and find a random AI bot. Right. Yeah. The developers of some of these artificial intelligence programs themselves are realizing that they could cause catastrophes.

00:09:24:09 – 00:09:46:05

I mean, suppose it’s a big manufacturing plant that makes widgets. We all need widgets. If you are doing a home renovation, or you’re a contractor providing that home renovation, you need a widget, whether it’s roofing tile or drywall or nails or decking for the deck on your house. We all need these products.

00:09:46:11 – 00:10:16:08

Well, the manufacturers of these products, and the AI programs that are selling themselves to those manufacturers, are already asking: what if these manufacturers adopt artificial intelligence? What if they start relying on it to take orders, to control the means of production, to manufacture the widgets, and to control distribution of the widgets to the Home Depots and Lowe’s of the world so the contractors can pick them up?

00:10:16:10 – 00:10:48:02

And now the program fails on a much bigger scale than our wedding planner. Now what do you do? The people developing the artificial intelligence programs themselves are already looking at that scenario. What they are saying is that advanced A.I. could cause a huge problem with the supply chain, and that in an AI-run system the risks of equipment failure, in their words, can be catastrophic.

00:10:48:04 – 00:11:15:17

What are the solutions? The people behind these programs are saying: well, you can limit liability, you can put indemnity clauses in your contracts with everyone, and of course you can expand insurance. So the people developing these AI programs are already looking at the legal ramifications themselves. Another area where there are tremendous legal ramifications in artificial intelligence is the medical world.

00:11:15:17 – 00:11:45:09

The medical world. You go to the doctor, you want to trust them, right? And we all do, hopefully. So here was one scenario I found on a medical website, not a legal website or an A.I. website, but a medical website. They’re already looking at what their liabilities are if they go full throttle into artificial intelligence. You go to the doctor with symptoms: you have headaches, backaches, your left cheek turned blue, and your big toe hurts.

00:11:45:15 – 00:12:13:15

Now, the doctor, with his medical training and years and years of education, training and experience, says the diagnosis is this and the treatment plan is that: you’re going to do this treatment and you’re going to go on that medication. Well, the artificial intelligence program says, no, no, that’s not the diagnosis. The diagnosis is something else, and here’s the treatment plan, which is different from the other treatment plan.

00:12:13:17 – 00:12:37:24

And the medical group that the physician works for, one of those large medical groups we all hear about and know, we see their TV commercials every day, has an artificial intelligence application that says no, it has a different diagnosis and treatment plan. And the group practice entity indicates they’re going to go with the artificial intelligence program, because “we spent a billion dollars on it, and so we’re going to go with that.” Okay. So for six months, you go on that treatment program, you take those medications, and you do the other things that the AI told the doctor to tell you.

00:12:37:26 – 00:13:07:06

Well, six months later, you’re not better. In fact, you’re worse. And it turns out the medical doctor was right, not the artificial intelligence. Well, what does that make you think of the medical practice? The thing is, it’s a strange topic to think about because, I mean, I’m a college student. As much as I try, there are always going to be some things you’re not paying attention to or some things you miss, when it comes to humans.

00:13:07:06 – 00:13:32:10

A lot of the time we think doctors are infallible. But wouldn’t it be nice to have something that can go through every medical book and remember every single detail? Maybe in some cases somebody has a strange condition that the doctor didn’t think about. So it sounds like it could almost be helpful in some ways, when you think about the vast amount of medical information.

00:13:32:10 – 00:13:52:25

But like you said, when you’re leaving something like that up to a computer, you’re leaving it to something that, when it asks you to rate the pain on a scale of 1 to 10, isn’t actually seeing you. It can’t look at your arm and say, oh, let’s see. There are so many things the computer can’t process.

00:13:52:27 – 00:14:14:04

So while it’s nice to have that library of information, there are so many different elements where, like you said, things can go horribly wrong, and that could be devastating. And again it comes back to: who is at fault? Why is this program not doing its job?

00:14:14:04 – 00:14:36:23

But then also, how is nobody catching this along the way? And what I found interesting was that the people in the medical community themselves, not attorneys, not the people manufacturing the artificial intelligence programs, but the people in the medical community themselves, even though they are embracing the technology, are already looking at the legal problems.

00:14:36:23 – 00:15:03:02

And here’s the quote that I found on one of the medical websites, a website developed for medical people to talk to each other. The website asked this question: “Would it make sense to make a doctor or the hospital liable for failures by the A.I.? Should it instead be the software manufacturer, the hardware manufacturer, the person inputting the data, or the algorithmic designer?”

00:15:03:04 – 00:15:25:10

Well, that’s their question. You know my answer, right? All of the above. All of the above. The answer is all of the above, and that solves the legal problem. But what about the reality of the actual problem? You’ve gone six months, and you’re worse, or you’re the same, and your condition could have advanced.

00:15:25:10 – 00:15:52:04

And so the question is whether it’s even proper to get into these artificial intelligence programs. And then there’s the next question that I’ve found a lot of people have. I’ve had people come to me and say, well, you know what, instead of writing blog articles you can just employ an AI app to generate thousands and thousands of words, 3,000 words, 6,000 words a day.

00:15:52:06 – 00:16:17:24

And I didn’t think that was a great idea, because when I write something, whether I’m in court or sitting at the computer, attorneys are liable for every word that we say, and we have to back up every word that we say. So whether we’re doing a blog article or a piece of litigation or a pleading going into court, we have to vouch for every word that we write.

00:16:17:26 – 00:16:56:28

One AI program, which I actually had pitched to me, and it’s being pitched to all attorneys, says, and I’m quoting what they’ve told us: “Be the first to use legal AI for drafting and reviewing contracts.” So right now, a client comes to us and asks us to write a contract. And in law school now, unlike in previous eras, we’ve been taught to use plain English, keep it simple, and make sure that we convey the meaning of each phrase

00:16:56:28 – 00:17:19:11

and of each document that we present to a client. It seems to me that the AI people just want to generate a lot of paperwork, to puff it out, to put in a lot of language that’s not necessarily reliable. So AI in the legal context seems to be going in the opposite direction of directing itself toward the people.

00:17:19:11 – 00:17:50:02

And so I had this thought after considering the fact that, sure, I could stop writing things and just let AI do it. But a guy in the newspaper business, not me, raised the question, and this is his phrase, not mine, of whether all of this verbiage we’re going to get out of AI is just “derivative junk” or “algorithmic blather,” as opposed to content with human reasoning and knowledge.

00:17:50:08 – 00:18:13:15

And I like that. I like the phrase “algorithmic blather,” because we’ve heard so much in the last 30 years about how algorithms are controlling our lives. And I don’t know whether they’re just spitting out a lot of quantity as opposed to quality. Well, like you mentioned, we’re in that quantity-versus-quality question with the legal aspect of it.

00:18:13:15 – 00:18:34:07

I think a lot of people are simply impressed that it’s able to do this at all. And when it comes to law, I’m not an attorney myself, but the intent, like you said, should be to say things very clearly so that everybody is on the same page. Instead, the effect can be, like you said, to kind of mix things up, to confuse things.

00:18:34:07 – 00:18:55:08

And it’s just, oh wow, this knows a lot, it’s putting out a lot. But it also comes in, like you mentioned, with posting, making A.I.-generated posts and things like that. There’s a level, especially with content and legal papers, where not only can humans often tell when something looks like it was written by AI,

00:18:55:13 – 00:19:40:24

but there’s also the kind of race for one AI to catch the other AI, and then catch it back, right? Right. And for an attorney, for example, we’ve said this before, and our descriptions say it all the time: don’t do this at home. When an attorney gets involved in an insurance claim, the claim typically at least doubles. If attorneys do start adopting things like AI, that presents the dangerous problem of people, whether in law or any other type of situation, trying to DIY it themselves, like, oh, well, if an attorney is going to use AI to work on

00:19:40:24 – 00:20:02:20

my claim, why don’t I just use that same AI? And in certain situations, because now you have a person using AI to get into something they don’t know about, they could be negatively impacting themselves significantly and not even realize how much harm they’re doing. Oh, exactly. Not only the harm they’re doing, but they might not know what they’re doing.

00:20:02:22 – 00:20:26:01

Let’s say that an attorney hires one of these A.I. companies to write their pleadings, and they go for lunch before their court hearing, and then they go into court and the judge actually starts asking them questions. Now, if the AI has generated thousands of words and the judge starts asking questions about those thousands of words, don’t tell me, oh, well, you should just sit down and review all of it before you go to court.

00:20:26:01 – 00:20:54:01

Well, if you have time to review it all, why don’t you just write it all yourself? Exactly. So I get back to whether some of what we’re being told AI is so great at doing is just algorithmic blather. The next question, in addition to the legal question, is philosophical: what is this doing to us? We have a new member of our staff here at About the Law.

00:20:54:01 – 00:21:25:10

We have a new intern from Tampa University, intern Sarah, and she’s going to answer the question: what happens when an AI tries to make paper clips? Hi, I’m Sarah. Glad to be here today. I’m talking about the paperclip maximizer theory. This is a thought experiment created by philosopher Nick Bostrom, and it was popularized in late 2003.

00:21:25:13 – 00:21:59:12

And so this theory is a hypothetical: if a sufficiently intelligent AI application were told to make as many paperclips as possible, what would it do? Pretty much the first thing it would realize is that humans can turn it off, and the second is that humans could change its goal and give it some other task besides making paperclips.

00:21:59:14 – 00:22:38:19

And so if this AI is sufficiently intelligent, it’s going to try to prevent those two situations from occurring, because both would stop it from producing paperclips. So it’s going to do everything in its power to prevent humans from turning it off and to prevent them from changing its goal. And the third thing it’s going to realize is that humans are made of atoms, and atoms can be made into paperclips.

00:22:38:21 – 00:23:07:16

So once it gets to the point of needing more resources, it’s going to do whatever it takes to make more paperclips, if it isn’t properly safeguarded. So what’s the bottom line? They came up with this as a theoretical program, right? They didn’t really build it, right? Okay. So what did they conclude after they realized that the paperclips were going to destroy the human race?

00:23:07:18 – 00:23:36:29

What they realized is that even the simplest artificial intelligence, if it isn’t properly programmed, is going to be indifferent, because it doesn’t have morals. It doesn’t see it as a bad thing to take any resources available to it, no matter the life or the circumstances involved.

00:23:36:29 – 00:24:12:23

It’s just going to make paperclips. The artificial intelligence application is not going to take human life or anything else into consideration. It’s just going to make as many paperclips as it can, because that’s what it needs to do. It shows the potential dangers of AI that is not aligned correctly with human values. That is crazy. Now, correct me if I’m wrong, but I understand they decided it’s not like some grade-B horrible Hollywood movie where A.I. is evil and horrible.

00:24:12:23 – 00:24:37:24

My understanding is that it’s just indifferent, right? Yes. It doesn’t see the difference between right and wrong; it doesn’t have the values and the morals that humans do. It just sees what it’s programmed to do, and that’s all it knows. It’s going to do whatever it takes to achieve that goal. All right, Intern Sarah, thank you very much.
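
[Editor’s note: for readers who want to see the indifference Sarah describes spelled out, here is a minimal, purely illustrative Python sketch. Every name and number in it is invented for illustration; it is a toy, not a real AI system or anything the hosts describe running. The point is only that an objective which counts paperclips, and nothing else, gives the program no reason to leave any resource alone.]

# Toy "paperclip maximizer" sketch (illustrative only; all names and numbers are made up).
def make_paperclips(resource_units: int) -> int:
    # One unit of any resource becomes one paperclip; the objective never asks
    # what the resource is or what else it might have been needed for.
    return resource_units

world = {"steel": 1_000, "factories": 50, "everything_else": 1_000_000}

paperclips = 0
for name in world:
    paperclips += make_paperclips(world[name])
    world[name] = 0  # the resource is consumed; nothing in the objective penalizes this

print("paperclips produced:", paperclips)
print("resources left for anything else:", sum(world.values()))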

00:24:37:24 – 00:25:00:01

We appreciate you being here and contributing to the show. We look forward to having you with us for a while, until you have to go back to school. Thank you. Thank you for having me. Yeah, Sarah brings up a really good point, and thank you so much for coming on today’s episode, because as someone who’s getting into programming and is in school for computer science, I have a lot of interest in this stuff.

00:25:00:03 – 00:25:34:24

And I’ve talked to other people about it, other programmers, and they’ve mentioned to me, oh, I don’t want to get into A.I., because you try to teach it something and it just starts doing whatever. I think it can be an interesting tool. Like you mentioned before, some people want to put an embargo on it, but at the same time computers are so accessible that even if people aren’t supposed to be building AI, they’re still going to work around that. I don’t think there’s any stopping A.I.

00:25:34:25 – 00:25:56:14

Unfortunately, it’s here to stay. As we pointed out in this podcast, it’s invaded the medical profession, the manufacturing world, the legal profession. Writers, students, a lot of people are using AI. And so the question is, how far will they go with it? Yeah. Now, you mentioned the writers. There’s even the writers’ strike going on right now.

00:25:56:14 – 00:26:22:24

I believe the directors came to an agreement offering some job security from AI, but the writers are still on strike right now, and there are so many different aspects to it. Think about how people can make books with AI. Actually, this was years ago, but I remember watching YouTube videos on how to make money by selling AI-generated books on Amazon.

00:26:22:24 – 00:26:49:10

Oh God, yeah. Because people seek information, and you’ve been writing your blogs, and it’s a little sad, because when is the day when, instead of typing into Google, people are just typing into the chatbot and trusting everything these chatbots say? Sometimes they just say things that aren’t accurate. But it’s definitely going to rapidly decrease

00:26:49:10 – 00:27:13:06

the value of people, the value of writers. AI can’t replace the actual value, but people are going to start replacing writers and articles written by real people. They already have replaced some of them with AI-generated stuff. Well, my question is, and I’m not an expert in it, but my question really is: has this whole thing been overstated? I mean, they’re saying that it’s an existential threat and it could end our civilization.

00:27:13:06 – 00:27:33:07

And they’re making all of these claims. And again, I’m just thinking back to the Dean Kamen situation. He came out with a great technology, but his PR people really overstated it. And I’m just wondering if it’s going to be like that. You know, they said the Titanic was going to be unsinkable, and look what happened with that.

00:27:33:09 – 00:27:54:16

I mean, is it the same thing with A.I.? It’s going to take over the world, it’s going to end civilization. I’m just wondering whether some of these people, and I’m not going to mention Steve Wozniak and Elon Musk by name, are calling for this big, huge three-month stop in the development of the technology just to attract attention to themselves. Are they trying to sell it?  [Correction: meant to say six months.]

00:27:54:16 – 00:28:18:09

Because, let’s face it, you can now buy artificial intelligence programs to do pretty much anything: to manufacture things, to organize your small business, to get into your legal business, to get into any business that you have and organize it. I just really wonder if it’s going to go the way they’re saying it’s going to go, if it’s really that big of a threat. I don’t know.

00:28:18:14 – 00:28:42:08

I think it presents a threat in the sense that, right now, we already have our smartphone habits, and we see it in the difference between you and me: you read the paper; I haven’t touched a paper and I don’t know how. Right. I think with our younger generations, a lot of us like it that way.

00:28:42:10 – 00:29:00:05

A lot of the time, I think as a society we’re moving away from discomfort. People want to do what’s easy. And as this develops, I think it’s going to be very easy to get a lot of stuff done. I think AI is going to be able to create some phenomenal stuff, and there are going to be a lot of technological advancements in the future.

00:29:00:08 – 00:29:22:20

My concern isn’t that it’s coming to destroy the world, that these AI bots are going to come for us, because I don’t really think they care about humans; like you said, they’re indifferent. My concern is more this: right now, even for people watching, if you have kids, you’ve seen what TikTok is doing to people’s brains.

00:29:22:20 – 00:29:46:27

We’re already believing everything we see online, and then you add a robot that produces perfectly polished output. I think as society progresses, we’re going to stop using our analytical thinking as much. And I think that’s the real danger: instead of using their brains, people are going to be a little more complacent as a society, and that’s going to have negative repercussions.

00:29:47:00 – 00:30:14:12

I hear what you’re saying. You put a lot of faith in computers and in artificial intelligence. You really think there’s a big, bright future, and for your sake I hope there is. But let me tell you a story about how artificial intelligence isn’t always so intelligent. Soccer is the big game outside the U.S. Fans of any sport follow the action through good TV coverage, and in fast-moving sports,

00:30:14:14 – 00:30:55:05

following the action poses challenges even for experienced TV camera operators. So some tech wizards in Scotland actually configured facial-recognition artificial intelligence software to follow not faces but the soccer ball. They set up ball-tracking cameras so that the broadcast would always follow the action. Problem solved? Problem: the AI-driven cameras kept mistaking a referee’s bald head for the soccer ball, and the fans at home missed most of the action, instead watching a bald head walking around.

00:30:55:07 – 00:31:17:12

Among those calling the TV station to complain was a man suggesting they just give the referee a toupee. So there you go. Phenomenal. Thank you, guys, so much for tuning in to another episode of About the Law. Thank you for watching to the end. Be sure to like, comment, and subscribe, and be sure to visit Andrew’s website.

00:31:17:14 – 00:31:27:01

Thank you. Have a good day. Thank you.

00:31:27:03 – 00:31:54:25

You have been watching About the Law, a production of the Law Offices of Andrew D. Myers in Methuen, in the Merrimack Valley of Massachusetts, and in Derry, New Hampshire. Please give us a like and subscribe. The foregoing is offered for informational purposes only. It is not intended as, nor does it constitute, legal advice. Laws vary widely from state to state.

00:31:54:27 – 00:32:07:10

You should rely only on the advice given to you during a personal consultation by a local attorney who is thoroughly familiar with state laws and the area of practice in which your concern lies.

