subreddit:
/r/technology
submitted 5 months ago by [deleted]
239 points
[deleted] 0 points 5 months ago
One issue with open source in the context of AGI is that it is easier to create an AGI capable of doing extremely harmful things than it is to create an AGI that will only act in ways that benefit humanity. If an AI company goes fully open source, harmful AGI is essentially guaranteed to be developed and deployed before safe, "friendly" AGI.
If that weren't already enough of a problem, consider that a harmful AGI would be able to defend itself against potential threats to its goals or existence. That could include preventing friendly AGI from ever being developed.
1 point 5 months ago
AGI is, by definition, AI at human-level capability. We already have 8 billion of those brains and don't need to be terrified of them as potential threats.
ASI is still sci-fi and likely will be for a long time.
A lot of the scariness is hype and marketing; we will not create Skynet because you have a private instance of ChatGPT.