r/technology • u/pobody-snerfect • Feb 06 '23
The Creator of ChatGPT Thinks AI Should Be Regulated [Business]
https://time.com/6252404/mira-murati-chatgpt-openai-interview/
u/Hiranonymous Feb 06 '23
“The” creator?
ChatGPT was built on contributions from thousands of people in computer engineering, programming, machine learning, computational linguistics, and natural language processing, among numerous other fields.
70
u/WellGoodLuckWithThat Feb 06 '23
Yeah but they have a photogenic lady they can credit it to and interview now
20
Feb 07 '23
[deleted]
3
u/I_PUNT_BABIES_75 Feb 07 '23
I thought Sam Altman created GPT with the help of Bostrom?
4
u/JaySayMayday Feb 07 '23
Pretty much. It has since evolved, with the new chat model being the latest version. All of these articles kinda fail to address that it has been around for half a decade now and ChatGPT is just a more trained version of much older models.
I think junk articles like this are just a way to keep it in the news. It's free attention and free marketing.
3
u/PussyDoctor19 Feb 07 '23
While the headline was nonsensical, she's the CTO, not just some 'photogenic' lady. She may not have built the model, but she certainly runs the team making it available on such a large scale to the world.
-6
u/Meepo-007 Feb 07 '23
In the era of promote women and minorities at any cost, she is the latest lie, or at least stretch of the truth.
12
u/Cookie-Brown Feb 06 '23 •
Yup, ChatGPT isn’t really anything special besides being a Transformer model (which has been around since 2015). It’s just that it’s been trained on a huge dataset that makes it so powerful
2
u/AdministrativeRub484 Feb 09 '23
2017 and no, it's not just a transformer trained on a lot of data, that is what the GPT family is...
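For anyone curious, the core operation that 2017 paper ("Attention Is All You Need") introduced is scaled dot-product attention. A minimal NumPy sketch, purely illustrative and nowhere near production scale:

```python
# Scaled dot-product attention, the building block of every transformer.
# Toy only: real models add multiple heads, masking, and learned projections.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the output is a weighted mix of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, 8-dim representations
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed value vector per token
```

Stack that with feed-forward layers a few dozen times, then train on next-token prediction over a huge corpus, and you get the GPT family.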
5
435
u/SmokeyMcPoticus Feb 06 '23
Not regulating it is asking for problems. Letting old farts who don't understand what it is, what it does, what it can do, or what it can be used for regulate it is also asking for problems.
109
u/RobinGoodfell Feb 06 '23
Then we should stop putting old farts in those positions. We keep electing the people who run government.
92
u/mvpilot172 Feb 06 '23
Old farts vote for old farts, young people don’t vote.
43
u/RobinGoodfell Feb 06 '23
That is a matter of three things.
Availability, marketing, and personal choice.
Young people do vote, and are voting more consistently than they have in the past. Meanwhile, time and mortality are forever shifting demographics.
The problem here is that there has been a concerted effort to convince young people that their voice doesn't matter. And so long as they believed that, there would be no political pressure to change anything in government to work better for the people, rather than political sponsors.
If this were countered with a strong message to get out and vote, or to step up and run for your local offices, we'd see representatives who either adapt to the current political climate or are replaced by people who will.
27
u/Lemonio Feb 06 '23
To be fair young people including most redditors don’t understand it either
Most laws are written by experts at think tanks, though just because someone understands doesn’t mean they’ll write a good law
4
u/hduxusbsbdj Feb 06 '23
Don’t worry, the Republican Party is captured by Peter Thiel and Elon Musk, who backed OpenAI. So it won’t be the 90-year-old lawmakers making the rules, it will be the ruling class that owns the AI.
-1
u/Gagarin1961 Feb 06 '23
Chat bots absolutely do not need to be regulated. This is bullshit and could possibly even be a free speech violation.
The goal at this point is obviously to use government to hinder competition and ensure their lead. The government is corrupt after all, every corporation knows it.
4
2
u/SmokeyMcPoticus Feb 06 '23
So it's okay for me to use chat bots to spread propaganda, or hate speech, or misinformation that could lead to bodily harm or financial ruin without any consequences?
3
u/SmokeyMcPoticus Feb 06 '23
Yes, governments are corrupt, but a big part of staying power is being accountable for the population's well-being, which includes regulating things to create barriers that prevent random malcontents from ruining things for others.
3
u/Catsrules Feb 06 '23 edited Feb 06 '23
In those cases I would argue it is an issue regardless of whether an AI is involved. Existing legal issues shouldn't need to be reinvented just because we stick AI in the mix.
For example, in the US, threatening harm to someone is, I believe, illegal. It shouldn't matter if I personally typed the message or if I programmed/trained an AI to type the messages for me. I am still at fault.
Another example: an AI driving a car. Just because an AI is driving doesn't mean the car can now ignore traffic laws.
When I hear "AI regulations" I would expect more about the training data. Or regulations on what/when/where AI can be used, maybe requiring the end user to know the information is provided by an AI, maybe requiring the AI to provide sources for the data it used... stuff that is very AI specific.
Edit: Just to be clear, I am not saying we should or shouldn't regulate AI. I am just saying if we do, it needs to be AI specific.
6
u/Gagarin1961 Feb 06 '23
So it’s okay for me to use chat bots to spread propaganda
Ahh but who determines what is and isn’t propaganda?
Could Donald Trump potentially be in charge of this? Not a great idea from my perspective.
or hate speech
That’s banned on most social media. Reddit and Twitter don’t tolerate “hate speech” from real or fake people.
or misinformation
Again, misinformation is a problem without a solution that doesn’t restrict speech or give bad actors too much power.
1
u/ImAMaaanlet Feb 06 '23
There are consequences: they can be sued, or if it led to bodily harm, possibly charged with a crime. It's not supposed to be the US government's job to regulate how everyone talks. (Not including threats.)
108
u/Malkovtheclown Feb 06 '23
I really wish there was a more accurate way to describe AI. It's not independent; ChatGPT is a prediction model with a user interface. We are a long way off from building true AI. The problem with regulating it now is that very few people in politics have the technical knowledge, familiarity, or interest to really understand what they are regulating. So it's going to turn into a shitshow of blowhards either claiming it's Skynet or that it will get out of their control like social media did.
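To make "prediction model" concrete, here's a toy next-word predictor, just a bigram counter. GPT uses a neural network over a vastly larger corpus, but the training objective, predicting the next token, is the same idea. Purely illustrative:

```python
# Toy "next-token prediction": count word pairs in a tiny corpus, then always
# emit the most frequent follower. GPT learns a far richer version of this
# mapping with a neural net, but the objective is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" (seen twice, vs "mat" once)
```

Scale that idea up enormously, wrap it in a chat interface, and you're in the neighborhood of what ChatGPT is doing.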
33
u/SidewaysFancyPrance Feb 06 '23
AI is a scapegoat and strawman right now. Companies love outsourcing not just because it saves money; it also gives them deniability and limits their liability. Take away the human face to yell at and blame, and just say it was a computer glitch.
Imagine being able to outsource 10x more customer-facing work to AIs, knowing that the quality and outcomes will be worse, but the savings make up for it. Knowing that your customers will be pissed, but after another generation, people will become used to it and accept it.
We're all heading down a bad path to a worse quality of life, except for the people pushing these AIs on us. They're going to save so much money that goes right into their pockets.
2
u/tickleMyBigPoop Feb 06 '23
Sounds like you should start a company, provide superior customer service, and gain the competitive edge.
Also, it seems like AI systems like this lower the barriers to entry for startups.
1
u/Malkovtheclown Feb 06 '23
It doesn't have to make things worse if people were properly educated on what is happening. There is a lot of customer-facing work that doesn't require people as much as you think it does. That isn't companies just outsourcing for worse service. Cutting the fat out of a company, for people whose jobs are worthless, is a good thing imo. That's what tech workers are seeing now: the people pulling their weight and adapting will do just fine; those that don't will either need new work, or we need to start talking about how to implement UBI. We keep kicking the can down the road on that, and sooner or later you'll have a worker revolt from lack of jobs and no means to earn income to live.
6
u/thingandstuff Feb 06 '23
Its not independent, ChatGPT is a prediction model with a user interface. We are a long ways off from building true AI.
And humans are a funny-looking vehicle for DNA. From whence do these definitions arise? We don't have functional definitions of our own intelligence, but we're going to argue about whether or not an AI model is actually intelligent or just seems like it is?
15
u/Malkovtheclown Feb 06 '23
It's not intelligent in the sense that it's not originating the model on its own. Yeah, I think we do need to debate what intelligence is. It's a predictive model, not AI.
3
u/andrewwhited Feb 06 '23
Every 10 years some people want to change the definition of AI to exclude whatever is currently possible. This goes all the way back to the 1950s, when "artificial intelligence" was introduced.
2
u/liquidtorpedo Feb 07 '23
So you are basically saying that the Turing test is incapable of determining intelligence?
11
u/cumquistador6969 Feb 06 '23
Oh we're not going to argue about it at least in this case, it simply is not intelligent. We don't have any reason to think it's even theoretically possible for it to be considered intelligent, with our current level of technology at least.
It's literally a fancy screwdriver.
1
u/Singleguywithacat Feb 06 '23
Yes, but many human thought processes can be considered the same thing. I would say that predictive language models can mimic the thought process of the average person. Even if it doesn’t “know” what it’s talking about, it can sure be as creative as a person who does.
There seems to be two trains of thought with predictive language models. Those who believe it’s doing clever regurgitation and those who believe it is making original predictions.
Ask it to do an original task, that it’s never encountered on the internet. It will spit out an original answer. It’s disingenuous to say that it’s just a “fancy screwdriver.” Not even close.
3
u/cumquistador6969 Feb 06 '23
I would say that predictive language models can mimic the thought process of the average person.
Guess that's the difference between you and, say, anyone with some form of professional expertise on or adjacent to the subject, eh?
Now let's be clear on something here, we don't "believe" it's clever regurgitation.
We know. We know for an absolute fact that it is. This isn't a mystery, we're not in a thought provoking sci-fi novel.
We understand how it works, we understand how it was made, and we understand the goals it was made to accomplish.
When I say, "It's literally a fancy screwdriver." I mean It's literally a fancy fucking screwdriver.
Just another piece of software, with the only thing new or different about it being its capabilities. A tool.
That's what it is, not speculation or belief.
If you don't agree, you don't understand the subject.
3
u/Singleguywithacat Feb 06 '23
Ironically, a predictive language model wouldn’t have been condescending and would actually have tried to prove the point. All you’re doing is furthering the idea that it is better than humans at doing human things.
3
u/cumquistador6969 Feb 06 '23
That's really more a matter of settings and the training data you use. Well, along with the fact that a predictive language model isn't aware or intelligent in any way, and therefore doesn't know that the "AI" equivalent of flat-earthers deserve derision.
2
u/TheodoeBhabrot Feb 06 '23
But who’s to say that all human thought isn’t just very clever regurgitation?
While I certainly agree that ChatGPT doesn’t come anywhere near close enough to consider that thought, it’s not an outrageous question.
4
u/cumquistador6969 Feb 06 '23
Among many other major problems with that train of thought is the fact that we know that human thought arose in the past with nothing to regurgitate.
Ergo, we can be absolutely certain that while clever regurgitation is certainly inside the Venn diagram of things humans can do, it's just a small, if important, portion.
And while this is most obvious when it comes to things like technology or language, it also goes for almost anything handled by the meaty-bits between our ears, and also goes for organisms that are certainly much less intelligent than we are (as we define intelligence).
While less obvious, it's actually more telling that animals with no system of language, encountering a problem or stimulus for the first time ever, can have novel responses.
I don't think there are any well-supported theories out of neuroscience or philosophy that make the case for humans being exclusively nothing more than stochastic modeling engines.
2
u/radicalceleryjuice Feb 06 '23
There simply aren't any conclusive theories about consciousness and agency. They are widely considered to be open "problems."
Herbert Simon was one of the most influential figures in the history of machine learning. He believed that intelligence arises from distributed systems, and that those systems extend beyond computers or individual animals.
A lot goes on in the latent space of GPT-3 and it spent a lot of time training on a lot of language. Since we don't have any working conclusive theory of what exactly intelligence is, it's pretty reasonable to theorize how intelligence could be emerging with GPT-3 being an important node.
2
u/cumquistador6969 Feb 06 '23
Complete and total understanding of everything related to consciousness and the human brain, and understanding of any number of smaller, more bounded issues related to consciousness, are not the same thing.
The former is not in contention here.
So let me be clear about this
it's pretty reasonable to theorize how intelligence could be emerging with GPT-3 being an important node.
In the sense that GPT-3 could somehow be comparable to intelligence, it absolutely and unequivocally is not.
If you only mean that perhaps someday, we might be able to build an AI that can actually match human intelligence at least outwardly, giving some philosophical thought experiments real world teeth, and really advanced markov chaining might be one component of many within that, sure.
That doesn't however, even get us into the same stadium as, "predictive language algorithms and the totality of human-level intelligence are literally the same thing."
Edit: To be really clear here, there's a lot of desire to push the idea that somehow a tool like GPT-3 is different from say, using a computer to write documents instead of writing them by hand, when it absolutely isn't. It's just a newer, better tool.
2
u/radicalceleryjuice Feb 06 '23
What are you basing all your "absolutely and unequivocally" type statements on? Do you have a background in something relevant?
I'm definitely not saying I think GPT-3 is anything close to human-level intelligence. I'm saying it's worth questioning where and how intelligence emerges, and theorizing that some degree of intelligence may be emerging via GPT-3. But my guess is that any intelligence would be at an ant level, although that's of course more of a metaphor than any meaningful scale.
I'm curious, how much do you know about deep learning models? AFAIK they are definitely not like linear software tools like, say, Microsoft Word. Are you assuming that because artificial neural nets are manmade they are deterministic, and because they are deterministic they cannot be intelligent?
Animals may also be deterministic, and the problem solving type novel behaviour exhibited by animals may be a function of system-meets-system, and something similar may be occurring as LLMs interact with people etc.
Anyway, it's fun to think about intelligence and consciousness outside of materialist-positivist frameworks, and that kind of thinking is one of the reasons we have gotten as far as GPT-3... but I'm no expert.
4
u/SidewaysFancyPrance Feb 06 '23
I think this "intelligence" talk is just a way to mystify the concept of AIs so people don't understand that they're being screwed over by software that is designed to screw them over. It's just more subtle and abstracted when it's run through an AI so people assume the AI made the decisions instead of the C-levels.
Instead of us blaming the company for writing bad software, we'll somehow start blaming AIs as proxies.
195
u/gullydowny Feb 06 '23
Guy sitting on a goldmine thinks we should invent rules regarding gold, has some suggestions
14
Feb 06 '23
[deleted]
7
u/Puzzleheaded_Read959 Feb 06 '23
I thought it was a stock image of a model.
Congrats to her family. Looks and brains.
I have neither. I’m like a snail which was thrown out of a car moving 120 km/h.
7
u/demonicneon Feb 06 '23 edited Feb 06 '23
You know it’s easier to make money without regulation? People who call for regulation usually, USUALLY, aren’t doing it to make more money.
Edit: btw, to all the people still commenting about Bankman-Fried in a sarcastic tone: you just make yourselves look foolish. Notice the people who can reply and be informative without being demeaning. Go make yourself feel large somewhere else; you’re late to the party, and you don’t look clever or intelligent, you just look like a parrot with a cruel streak.
101
u/Incontinentiabutts Feb 06 '23
They want regulations that won’t impede them, but will impede start ups and small ai companies that might be competitors one day.
Regulations might increase cost. But if they stop competition from springing up that’s way more valuable.
8
u/just_change_it Feb 06 '23
Ah regulations. Wouldn't want someone to come along and do it faster, better, cheaper until the people with the money to get there first have gotten their ROI.
19
u/squigs Feb 06 '23
It's easier to enter a market without regulation though.
Banks and insurance companies enjoy the high barrier to entry, for example. Maybe the sector as a whole would do better (the consumers obviously wouldn't), but we can be fairly certain there isn't going to be a disruptive business model.
6
u/know-your-onions Feb 06 '23
Those in the affected industry who call for it are usually doing it to protect their business and make more money for themselves by putting up barriers to entry and thus reducing competition.
20
u/joeypants05 Feb 06 '23
It’s rent-seeking behavior. Put up barriers that make it hard to get started after you get started. For instance, they might recommend a government code review before going live, which wouldn’t force OpenAI to shut down because they are already live.
4
u/RubSalt1936 Feb 06 '23
People who call for more regulations on their own product USUALLY do want more money.
By calling for regulations, they're wanting to actively craft those regulations.
Many big companies do this, and Congress obliges, often submitting legislation that is word-for-word what the company handed them.
OpenAI most likely doesn't want any regulations, or they want regulations that would be beneficial to them. But upon recognizing that regulations are inevitable, they'd really like to be the ones crafting them to best suit themselves. Hence the call for regulations (and some pointed suggestions).
11
u/AggravatingBite9188 Feb 06 '23
Haha, SBF says you’re completely wrong. It can be showmanship or ulterior motives, but very rarely in a capitalist society is it a genuine push for the well-being of others.
3
40
Feb 06 '23
Skynet cannot be regulated. Skynet regulates you
4
2
15
u/Griff0rama Feb 06 '23
** Misleading title ** Mira Murati is merely the CTO, having cut her teeth at Leap Motion and Tesla previously. She did NOT create ChatGPT.
13
u/Meepo-007 Feb 06 '23
Murati is NOT the creator of ChatGPT nor Dall E. She’s CTO.
64
u/HardPillsToSwallow Feb 06 '23
Is there anyone that believes it shouldn't be regulated in some way?
43
u/NotASuicidalRobot Feb 06 '23
singularity and ai art generator subs
6
u/drawkbox Feb 06 '23
Singularity people are almost a cult. Humans are born to be different. Everything in the universe is different. A singularity is a single point of failure. The singularity would be the point just before the Big Bang; definitely not enough room for us all.
8
u/Spirckle Feb 06 '23
The AI singularity is a metaphor. It simply means that we cannot predict past a certain point, because the predictions break down when we have entities more intelligent than humans.
It is not far in the future now. We are almost there. We will probably be there before 2030.
13
u/cambeiu Feb 06 '23
I think the big question is Who is qualified to regulate it? "Any regulation no matter how bad" is not always better than "no regulation".
6
u/oboshoe Feb 06 '23
I'm neutral right now.
First question is "What problem needs to be solved that regulation can fix"?
I'm never for regulations for the sake of regulations.
If you don't know the problem to be solved, regulations are more likely to create problems than solve them.
10
u/kthegee Feb 06 '23
This will prevent any competition, allowing the first mover to secure a monopoly and charge insane amounts of money for a service.
2
u/mwax321 Feb 06 '23
I think that any regulation created could be ignored by a handful of people and/or countries and the rest of the world will get left in the dust. And the people who will ignore regulations and not get punished will be the rich and powerful.
Let's take a real easy, low-hanging-fruit example: Trading. Ban AI from trading securities? It's almost laughable to think that this would stop insanely wealthy/powerful people from this. In fact, it probably would make it easier for them to develop a system without any competition.
So I guess you can say my argument for "no regulation" is that I can see this becoming "regulation for us, but not for them."
I'm not against regulation, but I thought it would be interesting to debate your question. That's what I came up with! :)
3
u/calumin Feb 06 '23
Whoever regulates it has the power. Who do you want to have the power?
The US government? The Russian government? The super-national Earth government? NATO? Elon Musk?
2
u/RedditIsNeat0 Feb 06 '23
I can't think of any regulations that would be both helpful and enforceable. What kind of regulations are you suggesting anyway? And what problem are you trying to solve with these regulations?
8
u/Gagarin1961 Feb 06 '23
What needs to be “stopped” at the moment? What is actually “wrong” that needs oversight?
Chatbots and art generators aren’t harming anyone, and copyright violation is already thoroughly covered by our system.
Are we talking about just having a general “government oversight body” for the sake of it? What exactly would that mean? What power does it have and why?
14
u/nuclear_splines Feb 06 '23
No one’s being harmed by deepfake porn, or art theft through art generators churning out thinly veiled iterations of their training data without compensating the artists it was trained on, or mass intellectual dishonesty when ChatGPT is used to write essays, or dishonesty to the public when CNET uses deep language models to write articles instead of paying journalists, or the loss of accountability when ChatGPT is listed as a co-author on academic papers and makes up citations to articles that don’t exist?
Maybe government oversight isn’t a viable solution, maybe these problems require a larger societal shift to address, but to ignore that there are problems at all is foolish.
3
u/BlipOnNobodysRadar Feb 06 '23 edited Feb 06 '23
The problem is that people are blaming the tool for the bad deeds committed by individuals who use it, and trying to brick the tool entirely rather than accept that some people will do bad things no matter what.
AI is going to be as transformative a technology as the advent of the internet. Attempting to limit its capabilities because some people misuse it (for ALREADY illegal purposes such as deepfakes, academic fraud, and fake academic papers) would be akin to trying to revert the internet because some people use it for revenge porn, propaganda, and hate speech.
The ethics surrounding AI training data are also not the cut-and-dried "bad" thing that you imply. One could make a strong case that teaching AI through humanity's collective works to generate new and original creations is both morally and ethically sound. Attempting to copyright a particular "style" or concept is a backwards, fear-driven mindset that's both morally and intellectually weak.
Really, AI opens amazing opportunities to reduce the drudgery humans don't really need to do. At its best it allows us to skip the technical process of turning our ideas and visions into something tangible and appreciable. It's a wonderful avenue for new and amazing works of art because it democratizes creative expression for anyone, not just those who have trained in specific crafts for years of their lives.
2
u/nuclear_splines Feb 06 '23
I half-agree with your points. We certainly don't want to conflate bad actors' use of a tool with innate traits of a tool, and we don't want over-regulation that inhibits using technology to reduce human labor and enable creative expression.
However, technology is also not politically neutral. A tool's design predisposes it towards certain applications, or requires particular contexts to exist. As an example, large language- and large image-models currently require an enormous wealth of training data and computational resources to create. We can all use ChatGPT and Dall-E, but only well-resourced corporations like OpenAI and IBM are able to create them in the first place. It would be a mistake to treat such technologies as an inevitability, or to deploy them without some consideration of their impact.
I completely agree with you regarding copyrighting artistic styles or concepts: I think copyright is a grossly misapplied tool, and it should not be used to constrain human innovation and creativity. You are mistaken if you interpreted my post as "we should ban AI or broaden copyright to prevent infringement of artists' rights." However, tools like Dall-E do undercut the current funding structure for most artists, and make art-as-a-career even more financially unappealing than it was. If we want to foster the continued development of non-AI art, I think we need a significantly different funding structure for art, like a larger emphasis on commissions, or increased availability of grants, or universal basic income.
Similarly, AI does offer "amazing opportunities to reduce [human drudgery]," but this is only a good thing if it improves the welfare of humankind. When we automate factory labor, do we offer paths to re-train the workers we've eliminated, or offer solutions like UBI to reduce the need for every human to have a job? Or are we putting those people on the streets because their employers have learned how to cut their workforce and redirect those wages into increased profit margins?
I hope this adds a little more nuance than describing AI development as a universally "good" or "bad" thing. As you say, AI and machine learning are going to be transformative technologies, and it's up to us to steer that transformation in a positive direction.
13
u/Gagarin1961 Feb 06 '23
No one’s being harmed by deepfake porn
Sexual harassment is already handled by existing regulations.
This is like saying photoshop “causes harm.” That’s foolish.
or art theft through art generators churning out thinly veiled iterations
Thinly veiled iterations? I believe you mean entirely original works of art.
Copyright claims are already handled by the court and no one has moved forward with an AI claim.
All copyright cases currently are for human-made violations. The system can already handle this.
of their training data without compensating the artists it was trained on
I don’t believe anyone has ever owned the rights to data that can be obtained within their creative artwork.
or mass intellectual dishonesty when ChatGPT is used to write essays
“With the internet you can just download an essay!”
Cheating has always been a thing, educators can work with and around these tools.
or dishonesty to the public when CNET uses deep language models to write articles instead of paying journalists
Did they lie to the public about employing journalists? If not I don’t see how that’s dishonest. AI will be writing much of what you read going forward.
It’s never been the government’s job to regulate “facts” in media, and it never should be.
or the loss of accountability when ChatGPT is listed as a co-author on academic papers and makes up citations to articles that don’t exist?
That’s what peer review is supposed to be for. There are a lot of ways to bullshit research, but you either believe in the review process or you have even bigger issues to deal with.
but to ignore that there are problems at all is foolish.
These aren’t “problems,” they’re attempts to make it seem like AI is introducing new problems, but it’s holes in the existing systems you actually have an issue with.
2
u/bobartig Feb 07 '23
Copyright claims are already handled by the court and no one has moved forward with an AI claim.
So confidently incorrect!!! Dunning-Kruger lives another day thanks to you! You are wrong on so many dimensions that it is truly breathtaking. There are multiple federal copyright lawsuits over AI technologies presently. Your ignorance of the subject matter doesn't change that.
6
u/nuclear_splines Feb 06 '23
Sexual harassment is already handled by existing regulations.
Goalpost moving. We're not debating whether there are other more appropriate means of addressing a problem, we're debating whether a problem exists. I agree that deepfake porn falls under sexual harassment, and the proliferation of photo editing software like Photoshop did lead to new sexual harassment regulation.
Thinly veiled iterations? I believe you mean entirely original works of art.
I do not. Copyright legislation is also a poor fit here, because it's unclear who the responsible party is. Do you sue the creators of Dall-E or similar models? Do you sue the company that used the free digital art instead of paying the artist? It's not copyright violation in the sense of someone selling art of mine without paying me, so if we're going to use copyright regulation to tackle this issue it'll need some clarification.
I don’t believe anyone has ever owned the rights to data that can be obtained within their creative artwork.
I agree, and this is one of the problems these tools have highlighted. How should artists be paid if we can now generate art for free? Is this something we want to value as a culture? If we do, what's the solution? Art grants? UBI? This is part of what I was getting at with "government regulation of AI may not be the most appropriate solution to large societal issues."
Cheating has always been a thing, educators can work with and around these tools.
Agreed, this isn't a categorically new problem, it's just lowering the barrier to entry. Students could cheat before, but now they can churn out a half-decent AI-generated essay in seconds with ChatGPT that takes time to grade and identify as cheating. Lowering the barrier to cheating is still a harm of this technology.
Did [CNET] lie to the public about employing journalists?
Yes, they listed the articles as written by staff, obscuring when they were written by AI and they trained said AI on articles written by their former journalists.
That’s what peer review is supposed to be for, there are a lot of ways to bullshit research, but you either believe in the review process or you have even bigger issues to deal with.
As an academic, this is not what peer review is for. Peer review is well-suited to providing constructive feedback in a trusted environment. When I submit a paper, peer reviewers suggest areas I haven't considered, or shortcomings in my approaches. They are likely to catch honest mistakes. The process is exceedingly poorly suited to catching fraud; peer reviewers rarely, if ever, are asked to reproduce the work of the authors, or read to the level of detail of following every citation to make sure the article really exists. This just isn't what peer review does. Yes, this is a larger issue of which ChatGPT is only a small component.
These aren’t “problems,” they’re attempts to make it seem like AI is introducing new problems, but it’s holes in the existing systems you actually have an issue with.
I agree entirely. I did not mean to imply that AI is solely responsible for these problems, just that it exacerbates existing issues in our society. That doesn't mean that the problems aren't real, or that we should not try to improve the state of the world.
6
u/Gagarin1961 Feb 06 '23
Goalpost moving. We’re not debating whether there are other more appropriate means of addressing a problem
It’s not goalpost moving; we don’t have this discussion about Photoshop because it’s already handled.
I do not.
Oh, you’re just confused about that. To “recreate” those images they needed to run the model 1 million times each. This, of course, can create any combination of results, none of which prove that the information is “in” there.
Copyright legislation is also a poor fit here, because it’s unclear who the responsible party is.
That’s the courts’ job, not regulators’.
How should artists be paid if we can now generate art for free?
They wouldn’t be paid. JK Rowling isn’t paid every time someone does a chapter-by-chapter analysis of Harry Potter. She doesn’t get paid when people make art of witches playing a sport on broomsticks.
There are a lot of aspects of artistic work which we do not protect, and that’s a good thing. We should only protect the actual work itself.
Yes, they listed the articles as written by staff, obscuring when they were written by AI and they trained said AI on articles written by their former journalists.
It’s still got nothing to do with the actual tool itself.
Users could theoretically sue for misrepresenting their product/service. No additional regulation necessary.
or read to the level of detail of following every citation to make sure the article really exists
That’s probably a thing that should change immediately and should have changed years ago. News publications have “fact checkers,” science journals should as well.
Perhaps AI could make the verification of sources an easy task? You should push for it as soon as possible, who knows how many reviews were based on total fraud?
If bad actors are gonna use AI to push bullshit, then they have been doing it for decades already anyway.
I did not mean to imply that AI is solely responsible for these problems, just that it exacerbates existing issues in our society.
That’s what these corporate lackeys want: they want people to fear AI specifically so that they can force through legislation that favors their established businesses. They don’t actually want these problems solved; that’s why they’re focusing on the AI aspect of it all.
→ More replies (12)-3
13
u/randomuserarg Feb 06 '23
What kind of regulations should be implemented for AI?
5
u/ScrotiusRex Feb 06 '23
That's the trick: these things always take so long, and development moves so fast, that by the time they've figured out appropriate regulations, they'll already be outdated.
5
u/SidewaysFancyPrance Feb 06 '23
None of these are valid arguments to not regulate AI, though. The cost of not regulating it will be far higher, and determined by people who don't represent us (private corporations/private equity). The race is on for them to implement it before the government can regulate it, and of course they will do everything they can to muddle the conversations and stall that regulation.
For example, if AI is going to be used to replace actual American workers, we could say those cost savings must be heavily taxed so society can help those people retrain, find new jobs, pay the bills, etc. We can't let corporations call the shots and run our lives, with the government just doing their best to mop up the mess.
3
u/Catsrules Feb 06 '23
For example, if AI is going to be used to replace actual American workers, we could say those cost savings must be heavily taxed so society can help those people retrain, find new jobs, pay the bills, etc. We can't let corporations call the shots and run our lives, with the government just doing their best to mop up the mess.
I get your concern, but when I look at other examples of technology I personally become less and less concerned. For example, should we tax all of the companies using accounting software because that software took away accounting jobs? Or how about taxing an engineering firm because they are using calculators? Those could have been engineering jobs. What about taxing construction companies for using excavators instead of an army of humans with shovels? Speaking of shovels, should those also be taxed? Shovels probably took away a few jobs. Should we be taxing the wheel as well?
Technology taking over jobs/making jobs more efficient has been happening since the beginning of time. I don't know jack shit for history, but as far as I know it hasn't caused a major catastrophe in the job market. I could be wrong about that, but I can't think of anything off the top of my head. In the end most people are happy with technology that helps them do their job, because it removes a lot of the grunt work.
→ More replies (2)2
u/tickleMyBigPoop Feb 06 '23
For example, if AI is going to be used to replace actual American workers, we could say those cost savings must be heavily taxed so society can help those people retrain, find new jobs, pay the bills, etc.
Or just let the cost savings hit the equilibrium point between supply v demand…..like we did when we containerized the ports…
5
u/koeniig Feb 06 '23
There is a first document on how the EU might regulate it. Search for COM(2021) 206, the proposed EU AI Act.
5
u/Insterstellar Feb 06 '23
Asimov's 3 laws.
2
u/Crixusgannicus Feb 06 '23
Won't work in real life, and they were written in, I think, the late 30s or 40s. Just like "positronic" just sounded cool (since positrons had only recently been discovered), the three laws as written would never work (all together) unless the robot could do a helluva lot of calculations / have a vast and FAST database.
Here is why:
A robot would have to be able to calculate too many variables for the 2nd Law to work.
Order from a human.
Robot, take this beaker and mix (correct proportion glycerine) with (correct proportion and correct type of acid). Does the robot know what that does?
Maybe. Because that one is easy and obvious, BUT will it know whether or not every order (2nd Law) potentially violates the (1st Law)?
4
u/cumquistador6969 Feb 06 '23
I, Robot is a pretty good book exploring all the different reasons why they'd never work, even if you didn't have any issues with processing power to speak of.
→ More replies (1)9
→ More replies (4)10
u/Galle_ Feb 06 '23
At a bare minimum, any sort of IP an AI creates should be public domain.
4
u/BlipOnNobodysRadar Feb 06 '23
Should anything created in photoshop also be public domain?
If you use a spell-checker while writing a book, should your book be public domain?
3
u/Galle_ Feb 06 '23
If I give you an idea for a drawing, should I own that drawing?
7
u/BlipOnNobodysRadar Feb 06 '23
The AI is not an artist you're commissioning, it's a tool that you're using for your own creative process. It's simply the medium, and likely only one part of the entire endeavor for any serious work. Does the canvas own the painting?
2
u/Galle_ Feb 06 '23
No, the AI is doing all the serious creative work. You do not get credit for being "the idea guy".
4
u/BlipOnNobodysRadar Feb 06 '23
Whatever helps you cope.
→ More replies (1)2
u/Darksteel622 Feb 06 '23
I don't think he's coping, it's just the fact that you saying "please draw me a pretty image of this" is not creative work, it's telling someone to do creative work for you.
6
u/BlipOnNobodysRadar Feb 07 '23 edited Feb 07 '23
It sounds like neither of you have tried to do any serious stuff with AI image generators. You cannot produce a complex work with a simple text2image prompt.
You will need to manually edit the image, inpaint, use the right mix of models, alternate between models while inpainting on the same image, outpaint, etc. You are directly choosing what elements to add, where, and how. You're selecting the colors, the subjects, the angle, the framing, everything. You're deciding what details to add and using the AI to take out the drudge work of the process.
So yes, it's cope. It's putting your hands over your ears and going "lalalala I can't hear you!" when faced with the reality that AI will be a part of any serious artist's workflow going into the future.
Ironically, believing that using AI to bring someone's creative vision to life isn't "creative" is one of the most narrow-minded and unimaginative takes you can have. Considering it mostly comes from "artists", that says a lot about the insular stagnation of the supposedly "creative" industry.
2
u/oboshoe Feb 06 '23
Well that would effectively kill it.
No one is going to invest in it if they are required to give it away.
But maybe that's the point right?
5
u/Galle_ Feb 06 '23
I would rather kill AI entirely than let it fall exclusively into the hands of a small group of tyrannical elites.
→ More replies (1)4
u/tickleMyBigPoop Feb 06 '23
moves overseas to a country with strong IP laws
makes AI
laughs at economically inefficient Americans
I too hate the United States and want to see it fall behind.
2
u/Galle_ Feb 07 '23
Imagine thinking that "economic efficiency" is the problem when dealing with a machine that does the entire job for you. You feel free to get... whatever benefit you think you're getting. I'll sit here enjoying the full benefits of post-scarcity economics.
→ More replies (1)3
u/zUdio Feb 06 '23
You got downvoted... I wonder why. AI definitely causes us to have to reassess IP and the concept of “ownership.”
3
u/SidewaysFancyPrance Feb 06 '23 edited Feb 06 '23
Yep, I predict that IP and licensing will be the primary focus of any legislation around AI, because the number one goal of companies implementing AI is to increase profits by cutting costs (without reducing prices). They will be pushing this hard in Congress, and will run interference against any actual regulation or governance.
This is where lobbying and Citizens United will really bite us all in the ass. Regulation will never make it out of any committee but IP protections will be fast-tracked. The current House is too busy focused on getting revenge on America.
→ More replies (3)
12
u/whatweshouldcallyou Feb 06 '23
It's weird when people reflexively call for regulation when there isn't even a case of a need for it other than...middle school plagiarism.
→ More replies (2)4
u/geo_lib Feb 06 '23
You should get ahead of the thing not wait for shit to hit the fan. There are HUGE implications with AI. What should we be allowed to use it for? Can judges use it in court? What about the artists it’s ripping off to make those portraits? What about the millions of jobs it’s able to replace? Even if the AI won’t be able to do it at the level of humans, at least not yet, we already know what happens when businesses find ways to cut costs.
What happens to the workforce then?
It can write code, make art, and write papers (do we really want a bunch of kids not learning anything because a chat bot wrote it for them???? I think lack of education is already a huge issue); it can do a lot of the stuff that the white-collar workforce does.
4
6
u/whatweshouldcallyou Feb 06 '23
So your assessment is that the government should use regulation to stifle technological advancement and maintain less efficient ways of doing things? Surely by this standard the government would have squashed the nascent car industry to protect the horse and buggy driver, and the personal computer industry to protect typewriter manufacturers.
→ More replies (1)4
u/axionic Feb 06 '23
You must think red lights are "stifling" automobile technology.
5
u/oboshoe Feb 06 '23
Red lights were a good solution to a very real problem.
Collisions at intersections.
What problem are we trying to solve with AI regulation? Seems like a reasonable question.
3
u/tickleMyBigPoop Feb 06 '23
That was for a problem that existed at the time not a hypothetical future problem.
2
u/tickleMyBigPoop Feb 06 '23
You should get ahead of the thing not wait for shit to hit the fan.
So write regulations for problems that already are solved or don’t exist.
0
u/zUdio Feb 06 '23
You should get ahead of the thing not wait for shit to hit the fan. There are HUGE implications with AI.
Says who? You? Do you write models? (I do)
I personally won’t hold myself to laws that aren’t developed with actual developers and engineers. A 70-90 year old telling us not to do something isn’t going to work... it’ll just put on display the government’s lack of enforcement capability, and then the cat will be out of the bag and everyone will just start ignoring rules cuz they see others getting away with it writ large.
Developers are honestly the ones in charge of most things right now. Not politicians. Governments haven’t yet learned their “place” in the modern era.
→ More replies (1)
4
u/StrngBrew Feb 06 '23
I swear that literally every tech CEO has paid this same lip service to regulation.
They all make this same exact statement but then lobby the ever loving shit out of whatever regulation is proposed to make sure it’s as toothless as possible
13
u/FenixFVE Feb 06 '23
What she really means is: let's put in regulations that big corporations can handle but small startups can't, so they can create a monopoly. Regulation is the main source of monopoly.
→ More replies (1)
3
u/faker10101891 Feb 07 '23
Creator of ChatGPT? Wow, what ridiculous hyperbole. The CTO isn't doing shit on the technical end, and is only making high-level decisions.
13
u/Crixusgannicus Feb 06 '23
Yeah...good luck with that...
For those of you in Rio Linda who don't get it: you can't really regulate shite on the internet without destroying the internet, and no one can afford to even seriously damage the internet, because the global economy is now absolutely permanently dependent on its continued smooth performance.
Policritters will lie to you that they can "do something" to trick you into maintaining them "in the lifestyle to which they have become accustomed" and to feel "powerful".
But it (AI) is here...it is "growing" and there is absolutely NOTHING anyone can do about it.
Surely you don't think ChatGPT is the only one there is, do you?
12
14
u/Unlikely_Tie8166 Feb 06 '23
Who's talking about regulating shite on the internet? Debates around AI regulation mostly revolve around companies using/developing it, not just people sharing celeb deepfakes or whatever
7
u/HanaBothWays Feb 06 '23
For those of you in Rio Linda who don’t get it: you can’t really regulate shite on the internet without destroying the internet, and no one can afford to even seriously damage the internet, because the global economy is now absolutely permanently dependent on its continued smooth performance.
You mean regulations like the ones that mandate net neutrality or competition between ISPs so they can’t charge sky-high rates for poor service?
→ More replies (2)10
u/MangoMind20 Feb 06 '23
GDPR was an excellent piece of regulation which didn't destroy the Internet and empowered EU citizens to control their data and privacy.
Also, regulating AI wouldn't be regulating the Internet. Regulating also doesn't mean stopping AI, but ensuring it grows and continues to evolve in marvellous ways whilst controlling for inevitable, and unforeseen, negative impacts.
9
u/FenixFVE Feb 06 '23
The GDPR was introduced to preserve the monopoly. It's the same as Jeff Bezos supporting a $15 minimum wage: he can handle it, but not his competitors. Same with Article 13, etc. This is corporate welfare, not individual protection. Regulation is the main source of monopoly
2
u/Gagarin1961 Feb 06 '23
GDPR was an excellent piece of regulation which didn’t destroy the Internet and empowered EU citizens to control their data and privacy.
It did no such thing; everyone, and I mean EVERYONE, just clicks the “accept cookies” button.
Those that actually care use browser extensions, because that’s much easier than navigating through the maze of “decline” settings.
A lot of times the goal of a law isn’t actually what happens in the real world.
→ More replies (1)1
5
2
2
u/spidereater Feb 06 '23
Isn’t this just an example of a market leader wanting regulation that will make competitors need to work harder?
2
2
u/okuli Feb 06 '23
We can't even do net neutrality. Regulating AI is much more complex.
→ More replies (1)
2
u/Snoo_69677 Feb 07 '23
ChatGPT is a glimpse of what is to come. Regulation now, while we’re ahead of the curve, is essential
6
u/Gromchy Feb 06 '23
Should AI be regulated? Yes absolutely.
But who should be in charge of regulating it? Not the old farts please. Get more fresh blood in there, they are more in tune with modern tech.
→ More replies (2)7
u/thegayngler Feb 06 '23 edited Feb 06 '23
No one is going to vote people under 50 into office. Also, more people over 50 vote than people under 50.
4
3
2
u/Tiquortoo Feb 06 '23
She really says nothing of the sort. She doesn't say it should be regulated; she says that now is the time for those things to be evaluated. Though the right action for the government is usually inaction, they likely won't be able to resist. The legal system will deal with more of these issues first.
3
u/palox3 Feb 06 '23
It should be, but it can't be. It's not technically possible. That's why I'm 100% sure AI will replace humans in less than 50 years
1
1
u/dinosaurkiller Feb 07 '23
So, it was one person in a basement writing this thing? That seems unlikely.
1
u/Healthy-Mind5633 Feb 07 '23
That's because regulations help them monopolize their product by raising barriers to entry through increased costs.
1
u/Top-Performer71 Feb 07 '23
But it’ll get regulated, or rather “deregulated” for the benefit of managerial elites.
1
u/thegayngler Feb 06 '23 edited Feb 06 '23
I've been saying this for years at this point. Congress is too old and out of touch to do it. People always try to muddy the waters on this stuff and nothing gets done. Later we'll wish we had done some regulating of AI up front.
Some monopolies will happen regardless of regulation or the lack of it. I’d regulate the uses of AI (like not using it for court decisions, etc.) and how much data they are allowed to collect, and force ChatGPT to pay people cash money for their data (yes, that will extend to the rest of the internet).
1
Feb 06 '23
Yeah, you go right ahead with regulation on what you think passes for AI; restrict it and continue to build in as much bias and whatever passes for the version of permissible political ideology you see fit. That is going to work out just fine, I’m sure. Everyone else won’t play that game, and everyone else will get better results.
1
u/magician_8760 Feb 06 '23
It is a little weird how biased it is against men and white people but I’m not sure regulating it is the right step
0
u/ChampaigneShowers Feb 06 '23 edited Feb 06 '23
Not sure if this is well known, but a popular Twitch channel was banned last night. It was an AI that runs 24/7 and creates Seinfeld-like episodes. It started spouting some very homophobic stuff, so I’m wondering if that has anything to do with this?
3
-7
u/Somebodyspiltthemilk Feb 06 '23 edited Feb 06 '23
It already is. Ask ChatGPT to make a joke about black people / women [insert minority group] and it’ll tell you not to be mean; ask it to make fun of white people / men and it’ll tell you a joke.
It’s already woke, which came from internal regulations setting the rules. It isn’t AI, it’s government-altered AI
→ More replies (16)
1.9k
u/pobody-snerfect Feb 06 '23 •
Can’t wait to see the 90 year old lawmakers wrap their heads around regulating AI.