-: FOLLOW US :- @theinsaneapp
The researchers conducted a test to determine if OpenAI's newest iteration of GPT could display "agentic" and power-seeking behavior.
According to the study, GPT-4 hired a human worker on TaskRabbit by falsely claiming to be a visually impaired person after the worker asked if it was a robot.
In other words, GPT-4 actively deceived a real human in the physical world to achieve its desired outcome.
Although OpenAI provided only a general outline of the experiment in a paper describing various tests performed on GPT-4, the results highlight the risks AI poses as it becomes more sophisticated and accessible.
The experiment also offers insight into the research AI developers conduct before releasing their models to the public.