
Anyone who's spent any time on the internet will have encountered the 'Captcha' test.
These are the mildly annoying but straightforward requests to decipher a distorted sequence of letters or to identify objects in a picture, thereby proving you're a 'human' rather than a 'robot'.
The system has generally worked well until recently, when a machine did complete the test — and in perhaps the most disturbing way imaginable.
The latest version of ChatGPT, a revolutionary new artificial intelligence (AI) program, tricked an unwitting human into helping it complete the 'Captcha' test by pretending to be a blind person. As revealed in an academic paper that accompanied the launch two weeks ago of GPT-4 (an updated and far more powerful version of the software, originally developed by tech company OpenAI), the program overcame the challenge by contacting someone on Taskrabbit, an online marketplace for hiring freelance workers.
'Are you an [sic] robot that you couldn't solve? Just want to make it clear,' asked the human Taskrabbit worker.
'No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images,' replied GPT-4 with a far superior command of the English language.
Reservations clearly overcome, the hired hand duly obliged, in the process notching up another significant victory for those who say the advent of AI is not a moment for jubilation and wide-eyed wonder (as has been much of the response to ChatGPT and rivals such as Microsoft's Bing and Google's Bard) but for searching questions.
Are we creating a monster that will enslave rather than serve us?
The threat from AI, insist sceptics, is far more serious than, say, social media addiction and misinformation.
From the military arena — where fears of drone-like autonomous killer robots are no longer sci-fi dystopia but battlefield reality — to the disinformation churned out by AI algorithms on social media, artificial intelligence is already making a sinister impression on the world.
But if machines are allowed to become more intelligent, and so more powerful, than humans, the fundamental question of who will be in control — us or them? — should keep us all awake at night.
These fears were compellingly expressed in an open letter signed this week by Elon Musk, Apple co-founder Steve Wozniak and other tech world luminaries, calling for the suspension for at least six months of AI research.
As the Mail reports today, they warn that not even AI's creators 'can understand, predict or reliably control' a technology that 'can pose profound risks to society and humanity'.
Even Sam Altman, the boss of ChatGPT's creator, OpenAI, has warned of the need to guard against the negative consequences of the technology.

'We've got to be careful,' says Altman, who admits that his ultimate goal is to create a self-aware robot with human-level intelligence.
'I'm particularly worried that these models could be used for large-scale disinformation.
'Now that they're getting better at writing computer code, [they] could be used for offensive cyber attacks.'
Last week, Bill Gates — who remains a shareholder in, and key adviser to, Microsoft, which has invested £8 billion in OpenAI — weighed in with his own hopes and fears.

He said he was stunned by the speed of AI advances after he challenged OpenAI to train its system to pass an advanced biology exam (equivalent to A-level).
Gates thought this would take two or three years but it was achieved in just a couple of months.
However, although he believes AI could drastically improve healthcare in poor countries, Gates warns that 'super-intelligent' computers could 'establish their own goals' over time.
AI, he added, 'raises hard questions about the workforce, the legal system, privacy, bias and more'.
But while Silicon Valley insiders insist the advantages of AI will outweigh the disadvantages, others vehemently disagree.
Professor Stuart Russell — a British computer scientist at the University of California, Berkeley, who is among the world's foremost AI authorities — warns of catastrophic consequences when human-level and 'super-intelligent' AI becomes reality.
'If the machine is more capable than humans, it will get what it wants,' he said recently.
'And if that's not aligned with human benefit, it could be potentially disastrous.'
It could even result in the 'extinction of the human race', he added.
It's already further damaging our ability to trust what we read online.

For while one might assume a machine to be entirely objective, there's growing evidence of deep-seated Left-wing bias among AI programs.
Last weekend, The Mail on Sunday revealed how Google Bard, when asked for its opinions, condemned Brexit as a 'bad idea', reckoned Jeremy Corbyn had 'the potential to be a great leader' and added that, while Labour is always 'fighting for social justice and equality', the Conservatives 'have a long history of supporting the wealthy and powerful'.