Is it centralized production? Where does it come from? The second one was
distribution: how do you get the product from one place to another? For what we're
gonna be talking about, for the most part, distribution means the infrastructure of
the internet.
Has more virtue, if you will, than another? Are we capitalists? Are we social
democrats? Are we communists? Does that affect the way that we look at what's
produced?
So, to generally map out what we're gonna be talking about, this diagram is sort of
useful. It appears in a number of different forms throughout the recent literature.
When we talk about generative AI, we're talking about a machine learning model
that's trained to create new data rather than make a prediction about a specific
data set. And so what we'll do today is we'll look at some of the ways in which
that definition is unique. If we have a broad field of artificial intelligence,
generative AI involves something called machine learning, and a subset of machine
learning called deep learning; we'll talk about those on Thursday.
It involves the use of natural language processing to create new data. Generative
AI is a phrase that's been around since around 2016, 2017. It's relatively new.
It's a new way of thinking about artificial intelligence. Before that, what models
of artificial intelligence had in common is that you have a model.
It processes information based on data that's been fed to it: training data. Again,
we'll talk more about that on Thursday. And then the model is able to make
predictions based on that data. The easiest way to think about this, and one that
we'll use over and over again, is predicting text, because a lot of this work comes
from work that was done in the early nineties on language processing.
Millions of dollars and a number of years were spent trying to get AIs to do
things that we see in our search engines all the time. We type in a phrase and it
fills in the rest of the phrase for us. Every time we text, it does that. Right?
Why does your phone do that?
How is your phone able to fill in your sentences when you text? Well, it actually
comes from before generative AI, but it's a form of AI. It's basically, yeah? It
recognizes how you type. Why does it recognize how you type?
Machine learning. Using machine learning.
Yes.
It recognizes that, as long as you keep typing into your texts, you use certain
phrases, certain spellings, certain sentences, certain emojis over and over and
over again. So all it does is fill in: when it starts to see a pattern that it
recognizes, it fills in the next thing, thinking that that's what you wanna do.
Which is why, if you're over 40, texting is incredibly frustrating, because you've
got this phone constantly in a battle with you, telling you what you're saying.
Because it's a skill to actually learn how to interact with that technology.
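Mechanically, that kind of autocomplete can be caricatured in a few lines: count
which word has followed which in your typing history, then suggest the most
frequent follower. This is only a toy sketch with invented phrases, not how any
real phone keyboard works.

```python
from collections import Counter, defaultdict

# A small made-up "history" of things a user has typed.
history = (
    "see you soon . see you tomorrow . see you soon . "
    "on my way . on my way home"
).split()

# For each word, count which words have followed it.
next_word_counts = defaultdict(Counter)
for current, following in zip(history, history[1:]):
    next_word_counts[current][following] += 1

def suggest(word):
    """Suggest the word most often seen after `word`, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("see"))  # the model has only ever seen "you" after "see"
print(suggest("my"))   # "way" follows "my" in every example
```

The more you type, the more the counts shift toward your habits, which is all the
"recognizing how you type" amounts to in this caricature.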
Right? But that's before generative AI. The machine is able to learn based on what
it's been fed, and it's able to make predictions based on the material it already
knows. It may be early to do this, but why don't I do it? The thing that's
interesting about an AI model, and I think I've said this before: an AI model is
based on what it's been trained with, and it doesn't necessarily care about the
context of what we're doing.
Yes? Yeah. I got one: name of a bird that catches fire. Well, that's what I've got.
Yeah.
That catches flies, that catches fire, that catches rats. Anybody else get anything
else? Catches? Seems we haven't done this enough today. Yes?
Fish. Catches fish. Okay. But catches fire is kind of a weird one to show up. It
would also be strange if we saw, you know, what is the name of a bird that catches
coal? Also, what is the name of a bird that catches one? A bird that catches one?
It was a movie. Catches one. That's a vibe.
Okay. So you can see that what Google is doing is based on whatever the last bunch
of searches have been that were done using phrases like this. It's trying to figure
out what comes next.
Okay. The thing about generative AI, think about this when you're using it. I
should have used Google again. Somebody do a Google search for birds that catch,
and then see at the top if you get an AI summary for your search. I did.
You did? Yeah. "Birds of prey, also known as raptors, are birds that hunt and catch
other animals. They have sharp talons and hooked beaks and are known for their
speed, strength, and keen eyesight." Okay.
Do me a favor. Go down to the top three searches and see if that text is from those
top results. Sure. Might be Wikipedia, I think. Yeah.
Why are those the top five searches? Because they're the most accurate and
absolutely true responses to your question. Right? No. Quinn.
Why, Quinn? Because it's the most relevant at that time frame. It could be that
it's the most relevant. Maybe, more so, it's the most convenient one? Maybe the
most convenient answers.
Yes. The most clicked on? They're the most popular answers. Or they're the answers
that somebody has paid Google money for, to get their response up. And one of the
ways that you get your response up is you have a bot running around the internet
clicking on your website over and over and over again.
So it's not necessarily the most accurate or the most relevant information, which
is why I always suggest that if you're gonna search, go down to the next page or
two to look and find your information. Look for sources and so forth. However, what
Google's now doing is saying, okay, most people look at the first few responses.
Yeah.
It looks at those and pulls out what it thinks is the relevant information for you.
So the AI is giving us the most relevant points, sort of, based on how it's
programmed. And then it's generating something new. It's generating new text. It's
not just cutting and pasting.
It's summarizing. It's paraphrasing. And it's doing it based on the algorithms it's
been trained on. So whereas machine learning models in the past could make
predictions: they have a data set.
They can make predictions. So in banking (referring to Geiger here), you could have
all kinds of information about mortgage payments. And then your AI could predict,
you know, which one of us is most likely or least likely to miss a payment. Why?
Because it just looks at all the data and makes a prediction based on the data.
That's not the same thing as being trained on a whole bunch of information and then
giving you, in essence, a new document. It's not just a prediction. It could be a
summary. It could be a tutorial, a comment, or whatever.
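To make the banking example concrete, here is a hedged toy sketch of that kind of
prediction: compare a new customer to the most similar past customers and take a
majority vote. The records, fields, and numbers are all invented for illustration;
real credit models are far more involved.

```python
# Each made-up record: (missed_payments_last_year, debt_to_income_ratio, missed_next)
past_customers = [
    (0, 0.2, False), (0, 0.3, False), (1, 0.5, True),
    (2, 0.6, True),  (0, 0.4, False), (3, 0.7, True),
]

def predict_miss(missed_before, ratio):
    """Predict by majority vote of the three most similar past customers."""
    scored = sorted(
        past_customers,
        key=lambda c: abs(c[0] - missed_before) + abs(c[1] - ratio),
    )
    nearest = scored[:3]                 # three closest records
    votes = sum(c[2] for c in nearest)   # how many of them missed a payment
    return votes >= 2                    # majority says "will miss"

print(predict_miss(0, 0.25))  # resembles the reliable payers
print(predict_miss(2, 0.65))  # resembles the customers who missed
```

The point is that the model only ever echoes patterns in the data it was given; it
predicts, it doesn't produce a new document.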
The idea behind generative AI is that it learns to generate more objects than are
in its training set, and the objects that it makes look like the data it's been
trained on. This will be important later when we talk about some of the hiccups in
AI models. The idea is, you do a Google search and your response is like one of
those Quora summaries: when you go down, there's a question mark, and you ask, and
you get a little summary or something, but it doesn't actually point you to a
document. How many of you use those?
Never. Okay. So machine learning, then, is just the process of giving a program a
bunch of information to learn from so that it can either predict occurrences of
something, like words in a sentence, or generate something new. Deep learning is
what's important for generative AI. Machine learning in general is how AIs have
been trained since the eighties.
Deep learning does something different. Again, on Thursday we'll map this out. It
uses something called neural nets or neural networks. The idea is very simple. You
put input into the computer.
The computer magically processes that information statistically and gives you an
output. The idea here is that in the early days of AI, the question was: can we
build a mechanical brain? Can we build a machine that actually simulates what a
brain does? And by the 1980s, people realized we can't do that. We can't just build
Data and his positronic brain.
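That input, black box, output flow can be sketched as a single tiny feed-forward
pass: inputs are combined through weighted sums and squashed into an output
between 0 and 1. The weights below are made up for illustration; in a real neural
network they are learned from data, and real networks have vastly more of them.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One pass through a tiny network: inputs -> hidden layer -> one output."""
    hidden = [
        sigmoid(sum(w * x for w, x in zip(row, inputs)))
        for row in hidden_weights
    ]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Made-up weights; training would adjust these numbers.
hidden_weights = [[0.5, -0.6], [0.9, 0.2]]
output_weights = [1.0, -1.0]

print(forward([1.0, 0.0], hidden_weights, output_weights))
```

From the outside this really is a black box: numbers go in, a number comes out,
and all the "processing" is just arithmetic on the weights.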
But what we can do is look at the way that brains work. I'm gonna use this phrase
over and over in the course: in a language game, they took the language of AI and
flipped it around. By the 1970s, researchers had not been able to make a mechanical
brain. Okay?
So what they started to say was, you know what? It's not that our research has been
unsuccessful, because we created all kinds of programs that do things. Programs
that play chess. Programs that respond to questions and fool people as if they're
psychotherapists. What we've done instead is we've discovered the way that brains
work, and we've been wrong about them all this time.
So human brains and computers are both part of the same sort of genus or species of
things called information processors. They just process information differently.
And so the language changes, and they start talking about these things called
neural networks. For a long time, if we talked about computers, we talked about
input, a black box, and output. You type a word or part of a sentence into your
computer, and it does something magical.
Then another neural network looks at the information that's been processed and
either feeds it back to the very beginning as input or makes all kinds of
connections. And again, we'll talk about this on Thursday. The idea, though, is
that this new model of learning, deep learning, has led AI researchers to believe
that it's possible for AIs to process much more complex models and data sets more
efficiently, and to engage in behavior that looks different than simply running a
computer. I wanna ask: how many of you are completely non-science, non-tech people?
Okay.
Don't worry. Through repetition and focus, that's about as technical as I'm gonna
get. Right? There's gonna be no math. It's to understand, in a conceptual way, how
these things work.
And the language game of how we define things is very, very important. So let's
look at two things. Definitions. You may notice I skipped some of my slides with my
first class today, but anyway, okay. What is artificial intelligence?
Well, here are some definitions, some recent definitions, for you. Does one of you
wanna read the first one for me up here? Okay. Intelligence displayed or simulated
by some kind of algorithm or machines. So: intelligence, displayed or simulated.
Okay. Somebody read the second one. Okay. The science and engineering of machines
considered intelligent by the standards of human intelligence. Okay.
And finally, here's Boden's definition. Go ahead. AI seeks to make computers do the
sort of stuff that minds can do. Okay. So: anything that's helpful, anything that's
frustrating, anything that's useless about these different definitions?
You do remember this is the cultural production class, just in case somebody's here
for accounting. So the first thing that's set up here, and I'm gonna talk about
this again maybe next week, is this relationship between humans and machines.
Right? The idea that we're talking about something called intelligence that can be
displayed or can be simulated.
We're not saying that we're creating a robot that can walk and talk and interact in
the world. I'm gonna show you one later. We're saying something that can actually
persuade us that there's intelligence behind the actions.
The second definition, again, tries to make it more complicated by saying "the
science and engineering," recognizing that AI is a discipline as well as a
technology. But saying that something is considered intelligent by the standards of
human intelligence doesn't tell us what intelligence is, so it doesn't tell us what
artificial intelligence is. Margaret Boden, who's actually sort of an AI skeptic in
some ways but a world-renowned AI scientist, says AI seeks to make computers do the
sort of stuff that minds can do. And it's really important, for Boden, that the
mind doesn't have to be human. Animals display intelligence, or goal-directed
behavior, if we define intelligence that way. An ant colony displays purpose. They
go to war. They protect the hive.
AI models, more than anything else, are about accomplishing tasks. About learning
from data, whether that data is provided to them or, and this is one of the goals,
eventually gathered on their own. I'll talk about embedded systems in a minute.
Embedded systems will be able to bring in information from the world the same way
we do. We bring in information from the world through our senses, through our
remembered history, vicariously through the experience of others when we read
novels or watch movies and so forth. And eventually, with cameras for eyes and
microphones for ears, and sensors, AIs may be able to collect their own data rather
than simply use the data that's provided to them to train on.
And at the top, I think, is the definition that we'll use, the definition that will
show up in the glossary slide at the end of the week for you, for studying for your
test: AI refers to intelligence displayed or simulated by technological means. They
were doing AI research in the 1950s with giant computers that had vacuum tubes. We
now have apps on our phones that do the same thing. So it's just a technologically
mediated simulation of intelligence.
So every time you talk to Alexa, every time you talk to Siri, even a long time ago
when they were nothing more than audio interfaces for the equivalent of Google
searches and commands that you could do on the keyboard, the impression that we
were interacting with another being has been there, and that's what the AI is
doing.
So let's make some distinctions. Strong AI is often called general AI. Or, sorry,
artificial general intelligence: AGI. Okay? We're gonna talk about this a little
bit later when we talk about fears and hype.
But artificial general intelligence is this idea that we can program a brain in a
box. That Alexa can eventually strike up a conversation because she's bored. Say,
you know, Dan, do you really have to be working on that right now? Can't you play
chess? I think that sweater is really ugly.
You should wear something else. Those sorts of things. Artificial general
intelligence is generally considered, OpenAI notwithstanding, to be a long-term
goal, if it can ever happen at all. And there are questions about whether we
actually need that kind of AI. Weak AI, using our definition of intelligence
displayed or simulated by technological means, performs in a specific domain.
So for years, the goal was to program a computer that could beat the grand master
at chess. That happened years and years ago. Then, in 2016, a huge leap forward
took place with a much more complicated game, Go, when a computer beat a grand
master at Go, and beat him using moves that made no obvious sense. The idea in AI
chess is that there are a finite number of chess moves. And so, basically, the data
set that the computers were trained on, as computer memory and storage and speed
came up, was all the possible moves in all the possible combinations.
And so, basically, you've got a computer that has a giant Excel sheet with all the
possible moves that could be made, and references to some of the classical
conclusions to games. So if the board suddenly looks like one of those, the
computer knows what to do with it. That's old-fashioned; sometimes it's actually
called good old-fashioned AI. In the Go example, the computer won by making moves
that, as it were, looked completely bizarre and made no sense whatsoever, but
allowed the computer to win.
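That "giant Excel sheet" style of AI can be caricatured as a lookup table: the
sequence of moves so far is the key, and the stored reply is the answer. The
openings and replies below are placeholders for illustration, not real engine
data, and a real table would be astronomically larger.

```python
# Good old-fashioned AI as pure lookup: position in, stored reply out.
# These entries are made up for the sake of the sketch.
best_reply = {
    "e4":    "c5",   # if the opponent opens e4, answer c5
    "e4 c5": "Nf3",
    "d4":    "Nf6",
}

def choose_move(moves_so_far):
    """Look the position up; no entry means the table simply has no answer."""
    return best_reply.get(" ".join(moves_so_far), "no entry for this position")

print(choose_move(["e4"]))  # found in the table
print(choose_move(["a3"]))  # unknown position: the table is silent
```

The limitation is visible immediately: anything outside the table is invisible to
the system, which is exactly why the Go program's unfamiliar moves marked a
different kind of AI.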
Right? Again, let's use the banking example. If you have a customer who comes in
and you go to your computer and you have a series of steps to follow, you're
following an algorithm. It's just not necessarily a mathematical algorithm. It's an
algorithm of procedures and policies.
And we do that every time we're applying it to solve a real-world problem. Get rid
of the customer as soon as you can. Solve their problem. Maybe upsell them. Okay.
You're doing a lot of jobs. That's sort of one of the images that might come to
mind for folks. On the other side is a brand-new news story from last week. This is
Aria. Aria is a robot girlfriend produced by an American company, Realbotix. For
$175,000, you can have her in hardware.
And let's just take a sec. That's too costly. Yeah. I see it now. Okay.
That's it. So here's the part: meaningful conversations, focusing on companionship
and interaction. Do you know any other robots? Yes. I'm particularly interested...
There's a link to the whole video there. Impressions, comments? Yes. I'm ticked.
Yeah.
That was so scary. I wouldn't want her with the kitchen knife, standing in the
corner charging. I mean, her Tesla charge spots. Right? I just want them to, like...
it's like, for intimacy.
Right. A number of times intimacy is mentioned, and yet the assumption from the
responses is that intimacy means conversation. But we won't go anywhere else with
that. Aria does not seem particularly mobile. I think she's anchored to the ground,
because some of those movements she was making would topple her over.
So I would imagine she's on some sort of stand. So that'd be great, you know, have
it in your living room. For $175,000? I'm good. Also, how is the president of the
robotics company talking with Aria? Through his phone. Through an app. That was the
AI, not the robot. So I can almost guarantee you there'll be firmware updates.
There'll be, you know, do you wanna connect Aria to your refrigerator? Do you wanna
connect Aria to your Siri?
Your Alexa? Now, this being said, it's not just human-like robots, but robots that
are anthropomorphic. If you go and look, there are all kinds of robots: clear or
shiny white plastic robots like in the Will Smith I, Robot movie. They're being
developed in Japan in particular as companions.
Particularly companions for old folks. Which means you'll be able to dump your
parents: don't worry about them, put them in a home somewhere, and there'll be a
bunch of robots there that'll engage in conversation. You wanna talk about 1960s
TV?
Sure. Just let me. Right? So embedded AI in hardware is often called embodied AI.
I'll talk about phenomenology, which is the philosophy that refers to our
experience of the world around us.
It turned out that not only was the robot not an AI, it was being remotely
controlled by somebody in the kitchen with the equivalent of an iPad. And it would
go out and serve drinks and say things to people. I kinda suspect we're getting a
bit of that with this. Yes?
There's an AI waiter at, like, Tops Pizza. There used to be. I don't know if it's
still there. An AI waiter... well, a robot waiter, or robot busboy, at the King of
India restaurant at Charlotte and Union Streets. What does that mean? Somebody
sends the robot out; the robot has markings on the floor for where the tables are,
and it goes, and it gets stuck in the middle of them. Right? Or it can bring your
food up to you. That's a robot, like your robot vacuum is a robot.
Okay? It has sensors that allow it to know where certain things are, and what to do
if it comes into contact with something. But that's a good example. Okay.
So, yes? There's a newer movie on Netflix. It's called Subservience, and it's about
how a robot is trying to, like, do everything for them, like taking notes on all
his blood levels and heart rates, and she thinks the wife is the problem because
the wife is stressing out, and she has cancer. And so she goes and, like, tries to
kill the wife and stuff. Okay.
She's coming. That's good. Can't beat the surrogates? Alright. You may not wanna
pick that up.
Alright. So if that's embodied AI, the other kind of AI we talk about sometimes is
virtual AI. Virtual AI is simply software running on the web. Our Google search.
The intrusions that Meta makes when we're doing searches, asking us, do you wanna
engage the Meta AI? No. Thanks. But also chatbots.
Here's a picture of a chatbot app called Tidio. Tidio is for business, and it was
designed for automated customer support. I felt like I was dealing with that kind
of automated customer support yesterday when I was trying to talk to HR at UND. We
can't talk to people anymore. You know?
It can answer FAQs, and it can also engage in conversational marketing. But the
distinction is that we're talking about software that's being run to create the
illusion of intelligence, whether or not that software is embodied in a
task-specific instrument or hardware, like the robot on the assembly line. You
know, it can fill boxes. It can empty boxes.
It can put components into place, but it can't tap in, for example, because that's
not what it's for. Whereas, perhaps, in the future, Aria 6.0 will either be working
for the military or appearing in movies or whatever. When we think about chatbots,
or we think about search engines and AI working that way, we don't think of it as
being material. And yet there's a materiality to all digital culture. All AI does
something, and all AI interacts in the world in some way.
Sometimes it's interacting with the database of information that it's been provided
by its human trainers. Sometimes it's been trained to go look for more information.
So, for example, GPT-4, the engine that runs ChatGPT and OpenAI's paid
applications, has basically been trained on much of the information that's
available on the internet by scraping, and by something called data mining:
basically going through the internet, looking for patterns, looking for
information, and adding that to its overall training set of information. So if all
AI does something, and if all AI interacts in the world, we have to think about the
material aspects of even virtual AI.
What are the material aspects of a virtual AI program? What are you running it on?
A computer? A phone? An iWatch?
The infrastructure that it relies on: the internet, wireless networks, the sensors
that it may or may not have. Apparently, the least private place that you can be in
the world right now is in a new car. Because not only, when you connect your phone,
is your phone connected to the internet and back to the car company; there are also
sensors that are collecting biometric data about you. Your heart rate. Your
temperature.
The actions of the car can not only be shared and sold, or shared with the research
and development team at whatever car manufacturer you're driving, but they can also
be shared with third-party sources like insurance companies. So you can buy a new
car and suddenly find the next year that your insurance goes up. Why? Because you
don't slow down at stop signs, and your car knows. Not only that: by
cross-referencing data, they can create profiles of you that include things like
religious affiliation, sexual habits, what you like to eat, because as soon as you
connect, all of your data is open to the car.
It's in the user agreement, but users don't realize it. Buy an old car. So, just to
get us started, you know.
We have this idea. So, one last question. If AI is a learning model that simulates
or displays intelligence, what's a chatbot? That's a good exam question. Yeah.
Instead of displaying intelligence, it simulates it? Again, it's possible. That's
okay. It's possible. Well, how is it able to?
What is it about the chatbot? That's the question. How many people use a chatbot?
One. Not many people are admitting to it.
Yeah. What does it look like when we use a chatbot? Yeah. Is the AI on Snapchat a
chatbot? Yeah.
It's gonna give you a dialogue, and probably you're gonna get an audio prompt if
you want it. Yeah. I was surprised by that. Oh, okay. So we're talking about a
model.
AI is a model of some function of brain-like or mind-like activity that interacts
in the world and then interacts with us. And next class, what I'm gonna do is talk
about how the generative AI model collects data and processes data to simulate some
of those functions. And I'm gonna play with that language game again and show you
how we come to say, well, if this works in this situation, it probably works with
us as well, which is not necessarily the case. It just looks like the case. Any
questions?
So you're gonna have one set of slides for this week. At the end of the week, we'll
continue with how the models work. And at the end of the slides, and you'll see
them, there'll be a glossary. And the glossary will just be a number of the terms
and definitions that we've talked about. Just to remind you, these are the things
to study, as well as the information that's on the slides.
Okay? So today's lecture came from a number of sources. It came from the
Coeckelbergh book on AI and ethics. It came from an issue of the MIT Technology
Review that talked about basic information about AI. And on Thursday, for the
information I talk about on neural nets, I'm drawing both from the books and from a
couple of articles from business sources, and I will post that information in our
folders.
That information is for your interest. You may find one particular subject
interesting, and so here's a way to explore a little bit more, remembering that the
material that you're tested on is the material from the slides and glossaries.
Okay? But those articles might be useful at the end of term when you're writing
your take-home exam. Because our take-home exam is going to be on a controlled data
set that I'm providing for you, and you don't have to go outside of the material
that we have on our D2L site when you're writing it. Any questions or comments? Any
more of those?
Yeah.
So which slides will be posted, and will that be weekly? Like, so, Thursday? So
Thursday. So Tuesdays and Thursdays? Tuesday and Thursday.
This week, it's gonna be Tuesday and Thursday together; I'm gonna put them in one
big slide deck. For last week's, there's the Tuesday deck and there's the Thursday
deck. I'll post them when I get back to class. They'll be posted in the folder for
January 7th tonight, but there are two slide decks there.
And from the second slide deck, the slide deck about pop music, the slides that are
important are the slides that talk about production, distribution, mediation,
materiality, and reception.