
Wednesday, August 30, 2023

DIGITAL WAVE of deception




Sophisticated scam technology harnessing artificial intelligence is capable of deceiving even the most vigilant.

COMPUTER-GENERATED children’s voices that fool their own parents. Masks created with photos from social media that deceive a system protected by face ID.

They sound like the stuff of science fiction, but these techniques are already available to criminals preying on everyday consumers.

The proliferation of scam tech has alarmed regulators, police, and people at the highest levels of the financial industry. Artificial intelligence (AI) in particular is being used to “turbocharge” fraud, US Federal Trade Commission chair Lina Khan warned in June, calling for increased vigilance from law enforcement.

Even before AI broke loose and became available to anyone with an Internet connection, the world was struggling to contain an explosion in financial fraud.

In the United States alone, consumers lost almost US$8.8bil (RM40.9bil) last year, up 44% from 2021, despite record investment in detection and prevention. Financial crime experts at major banks, including Wells Fargo & Co and Deutsche Bank AG, say the fraud boom on the horizon is one of the biggest threats facing their industry.

On top of paying the cost of fighting scams, the financial industry risks losing the faith of burned customers.

“It’s an arms race,” says James Roberts, who heads up fraud management at the Commonwealth Bank of Australia, the country’s biggest bank.

“It would be a stretch to say that we’re winning.”

The history of scams is surely as old as the history of trade and business.

One of the earliest known cases, more than 2,000 years ago, involved a Greek sea merchant who tried to sink his ship to get a fraudulent payout on an insurance policy.

Look back through any newspaper archive, and you’ll find countless attempts to part the gullible from their money.

But the dark economy of fraud, just like the broader economy, has periodic bursts of destabilising innovation.

New technology lowers the cost of running a scam and lets the criminal reach a larger pool of unprepared victims.

Email introduced every computer user in the world to a cast of hard-up princes who needed help rescuing their lost fortunes.

Crypto brought with it a blossoming of Ponzi schemes that spread virally over social media.

The future of fake

The ai explosion offers not only new tools but also the potential for life-changing financial losses.

And the increased sophistication and novelty of the technology mean that everyone, not just the credulous, is a potential victim.

The Covid-19 lockdowns accelerated the adoption of online banking around the world, with phones and laptops replacing face-to-face interactions at bank branches.

It’s brought advantages in lower costs and increased speed for financial firms and their customers, as well as openings for scammers.

Some of the new techniques go beyond what current off-the-shelf technology can do, and it’s not always easy to tell whether you’re dealing with a garden-variety fraudster or a nation-state actor.

“We are starting to see much more sophistication with respect to cybercrime,” says Amy Hogan-Burney, general manager of cybersecurity policy and protection at Microsoft Corp.

Globally, cybercrime costs, including scams, are set to hit US$8 trillion (RM37.18 trillion) this year, outstripping the economic output of Japan, the world’s third-largest economy.

By 2025, it will reach US$10.5 trillion (RM48.8 trillion), after more than tripling in a decade, according to researcher Cybersecurity Ventures.

In the Sydney suburb of Redfern, some of Roberts’ team of more than 500 spend their days eavesdropping on cons to hear firsthand how AI is reshaping their battle.

A fake request for money from a loved one isn’t new. But now parents get calls that clone their child’s voice with AI to sound indistinguishable from the real thing.

These tricks, known as social engineering scams, tend to have the highest hit rates and generate some of the quickest returns for fraudsters.

Today, cloning a person’s voice is becoming increasingly easy.

Once a scammer downloads a short audio sample from someone’s social media or voicemail message – it can be as short as 30 seconds – they can use AI voice-synthesising tools readily available online to create the content they need.

Public social media accounts make it easy to figure out who a person’s relatives and friends are, not to mention where they live and work and other vital information.

Bank bosses stress that scammers, who run their operations like businesses, are prepared to be patient, sometimes planning attacks for months.

What fraud teams are seeing so far is only a taste of what AI will make possible, according to Rob Pope, director of New Zealand’s government cybersecurity agency, CERT NZ.

He points out that AI simultaneously helps criminals increase the volume and customisation of their attacks.

“It’s a fair bet that over the next two or three years we’re going to see more AI-generated criminal attacks,” says Pope, a former deputy commissioner in the New Zealand Police who oversaw some of the nation’s highest-profile criminal cases. “What AI does is accelerate the levels of sophistication and the ability of these bad people to pivot very quickly. AI makes it easier for them.”

To give a sense of the challenge facing banks, Roberts says right now the Commonwealth Bank of Australia is tracking about 85 million events a day through a network of surveillance tools.

That’s in a country with a population of just 26 million.

The industry hopes to fight back by educating consumers about the risks and increasing investment in defensive technology.

New software lets CBA spot when customers use their computer mouse in an unusual way during a transaction – a red flag for a possible scam.

Anything suspicious, including the destination of an order and how the purchase is processed, can alert staff in as few as 30 milliseconds, allowing them to block the transaction.
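The mouse-behaviour flagging described above is, at its core, anomaly detection against a customer’s own history. The sketch below illustrates the idea with a simple z-score test on average pointer speed; the feature, threshold, and function names are illustrative assumptions, not details of CBA’s actual system.

```python
# A minimal sketch of behavioural anomaly detection: compare a session's
# average mouse speed against the customer's historical baseline and
# flag it when the deviation is statistically extreme.
from statistics import mean, stdev

def is_suspicious(session_speeds, baseline_speeds, z_threshold=3.0):
    """Return True if the session's mean speed deviates from the
    baseline mean by more than z_threshold standard deviations."""
    mu = mean(baseline_speeds)
    sigma = stdev(baseline_speeds)
    if sigma == 0:
        return False  # no variation in history; nothing to compare against
    z = abs(mean(session_speeds) - mu) / sigma
    return z > z_threshold
```

A real system would combine many such signals (click cadence, navigation path, device fingerprint) and score them in milliseconds, but the principle – deviation from the customer’s own established pattern – is the same.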

At Deutsche Bank, computer engineers have recently rebuilt their suspicious transaction detection system, called Black Forest, using the latest natural language processing models, according to Thomas Graf, a senior machine learning engineer there.

The tool looks at transaction criteria such as volume, currency, and destination and automatically learns from reams of data what patterns suggest fraud.

The model can be used on both retail and corporate transactions and has already unearthed several cases, including one involving organised crime, money laundering, and tax evasion.
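The approach of learning what “normal” looks like from transaction history – volume, currency, destination – and scoring departures from it can be sketched in a few lines. This is a deliberately simplified, hand-rolled illustration of the general technique; Deutsche Bank’s Black Forest uses learned language models over far richer data, and every name and threshold here is an assumption for illustration only.

```python
# Sketch: learn common (currency, destination) patterns and a typical
# amount range from history, then score new transactions for rarity
# and unusual size.
from collections import Counter

class FraudScorer:
    def __init__(self, history):
        # history: list of (amount, currency, destination) tuples
        self.pair_counts = Counter((c, d) for _, c, d in history)
        self.total = len(history)
        amounts = sorted(a for a, _, _ in history)
        # 95th-percentile amount serves as the "unusually large" cutoff
        self.p95 = amounts[int(0.95 * (len(amounts) - 1))]

    def score(self, amount, currency, destination):
        """Return a 0-2 suspicion score: +1 for a rarely seen
        currency/destination pair, +1 for an unusually large amount."""
        rarity = self.pair_counts[(currency, destination)] / self.total
        score = 0
        if rarity < 0.01:
            score += 1
        if amount > self.p95:
            score += 1
        return score
```

A production system would replace these fixed rules with a model trained on labelled fraud data, but the shape of the problem – learn patterns from reams of transactions, flag the outliers – is the one the article describes.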

Wells Fargo has overhauled its tech systems to counter the risk of AI-generated videos and voices. “We train our software and our employees to be able to spot these fakes,” says Chintan Mehta, Wells Fargo’s head of digital technology. But the system needs to keep evolving to keep up with the criminals. Detecting scams, of course, costs money.

The digital dance

One problem for companies: Every time they tighten things, criminals try to find a workaround.

For example, some US banks require customers to upload a photo of an ID document when signing up for an account.

Scammers are now buying stolen data on the dark web, finding photos of their victims on social media, and 3D-printing masks to create fake IDs with the stolen information.

“And these can look like everything from what you get at a Halloween shop to an extremely lifelike silicone mask of Hollywood standards,” says Alain Meier, head of identity at Plaid, which helps banks, financial technology companies, and other businesses battle fraud with its ID verification software. Plaid analyses skin texture and translucency to make sure the person in the photo looks real.

Meier, who’s dedicated his career to detecting fraud, says the best fraudsters, those running their schemes as businesses, build scamming software and package it up to sell on the dark web.

Prices can range from US$20 (RM95) to thousands of dollars.

“For example, it could be a Chrome extension to help you bypass fingerprinting or tools that can help you generate synthetic images,” he says.

As fraud gets more sophisticated, the question of who’s responsible for losses is getting more contentious.

In the United Kingdom, for example, victims of unknown transactions – say, someone copies and uses your credit card – are legally protected against losses.

If someone tricks you into making a payment, responsibility becomes less clear.

In July, the UK’s top court ruled that a couple who were fooled into sending money abroad couldn’t hold their bank liable simply for following their instructions.

But legislators and regulators have leeway to set other rules: The government is preparing to require banks to reimburse fraud victims when the cash is transferred via Faster Payments, a system for sending money between UK banks.

Politicians and consumer advocates in other countries are pushing for similar changes, arguing that it’s unreasonable to expect people to recognise these increasingly sophisticated scams.

Banks worry that changing the rules would simply make things easier for fraudsters.

Financial industry leaders around the world are also trying to push a share of the responsibility onto tech firms.

The fastest-growing scam category is investment fraud, often introduced to victims through search engines where scammers can easily buy sponsored advertising spots.

When would-be investors click through, they often find realistic prospectuses and other financial data. Once they transfer their money, it can take months, if not years, to realise they’ve been swindled when they try to cash in on their “investment”.

In June, a group of 30 lenders in the UK sent a letter to Prime Minister Rishi Sunak asking that tech companies contribute to refunds for victims of fraud stemming from their platforms.

The government says it’s planning new legislation and other measures to crack down on online financial scams.

The banking industry is lobbying to spread responsibility more widely, in part because costs appear to be going up. Once again, a familiar problem from economics applies in the scam economy, too.

Like pollution from a factory, new technology is creating an externality, or a cost imposed on others. In this case, there’s a heightened reach and risk for scams.

Neither banks nor consumers want to be the only ones forced to pay the price.

Chris Sheehan spent almost three decades with the country’s police force before joining National Australia Bank Ltd, where he heads investigations and fraud.

He’s added about 40 people to his team in the past year, backed by constant investment from the bank.

When he adds up all the staff and tech costs, “it scares me how big the number is”, he says.

“I am hopeful because there are technological solutions, but you never completely solve the problem,” he says. It reminds him of his time fighting drug gangs as a cop.

Framing it as a war on drugs was “a big mistake”, he says.

“I will never phrase it in that framework – of a war on scams – because the implication is that a war is winnable,” he says. “This is not winnable.” – Bloomberg


Sunday, July 9, 2023

‘Time for all to be trained to use AI’

There are encouraging signs that professionals in Malaysia are equipping themselves with a combination of hard and soft skills to enhance their employability and remain competitive. — 123rf.com
 

 

THE sooner bosses pay attention to artificial intelligence (AI) and what it can do, the better for all, including workers and the business.

As such, guidelines should be introduced by bosses in the country on how their workers should use AI in their jobs, says Malaysian Employers Federation (MEF) president Datuk Dr Syed Hussain Syed Husman.

He was responding to a proposal by the Human Resources Ministry for employers to develop their own policies and procedures for the ethical use of AI in view of its growth in Malaysia.

“This is a good suggestion as the world of work is changing and becoming more automated.

“Such a trend will continue. So the sooner we pay attention to this, the better.

“Now is the time to see how AI can help businesses and the industry, while looking at some guiding principles to help manage this,” he says.

While AI promises to smooth operations, he admits there are concerns over security, privacy, data trust, and ethics over its use.

“Businesses using AI models such as ChatGPT need to be aware that generative AI comes with its own set of risks.

“There is a need to establish rules and procedures to ensure secure implementation of AI.

“It will take time and human expertise to unlock AI’s full potential in a way that’s responsible, trustworthy and safe,” he says.

Recently, it was reported that more companies in Malaysia are exploring and integrating generative AI into their business operations.

However, not many have come up with official policies for its workers on its usage.

Some companies which have introduced guidelines have advised workers against providing personal information to AI systems to prevent any privacy issues.

While bosses are aware of the benefits AI can bring, MEF highlights the need for everyone to be trained to use it effectively.

“A lack of skilled talent and technical expertise has been a top barrier to implementing AI since its inception.

“To stay competitive in a tight labour market, companies must train their teams to use AI effectively and responsibly.

“If people don’t trust the work AI does or the data it’s built on, adoption of AI will lag and returns on investment will not be as fast as they should be,” Syed Hussain says.

In the next five years, he says bosses expect more people to be working alongside robots and smart machines specifically designed to help them to do their jobs better and more efficiently.

At the same time, jobs that can be performed through a simple search online or on ChatGPT could be at risk, says JobStreet Malaysia managing director Vic Sithasanan.

“In its place would be the prioritisation of skills to be able to query, discern, and ‘connect the dots’ or find relevance with technology that cannot replace the human touch,” he explains.

Even before Covid-19 posed a threat, job security was already on people’s minds because of automation, he adds.

“Almost every kind of worker has some level of concern.

“JobStreet’s Decoding Global Talent’s third report showed that in 2021, 46% of workers in their 20s and 41% in their 30s were already worried about technology putting them out of work.

“From media to information technology, concerns about automation are particularly high – especially among workers with repetitive jobs,” he says.

According to JobStreet, among some of the industries and jobs that may be replaced by AI – and not just ChatGPT – are translating, managing social media, umpiring sports, and jobs in libraries and call centres.

“However, while many people are nervously waiting for the world to become completely reliant on AI in the next few decades, there will always be a need for human force to drive this automation.

“Though there may be many jobs that will disappear in the near future due to AI replacement, jobseekers, employees and even employers can enhance and enrich their potential to ensure that their career stays current and in demand.

“The world’s workforce may combine man and machine, but a robot-dominated world is not about to become a practical reality yet,” Sithasanan says.

While the work landscape is evolving due to technology, so are the skills in need, says LinkedIn country manager for Malaysia Rohit Kalsy.

“LinkedIn research shows that top skills required for a particular job have changed by an average of 27% since 2015, with the pace of change accelerating during the pandemic.

“At this pace, skills could change by 43% to 47% by 2025.

“Between 2021 and 2025, we would likely see three new skills in the top skills for a job,” says Rohit, who is also the company’s head of emerging markets (South-East Asia).

However, there are encouraging signs that professionals in Malaysia are equipping themselves with a combination of hard and soft skills to enhance their employability and remain competitive.

“Malaysian learners were among the 7.3 million globally who enrolled in the top 20 most popular LinkedIn learning courses between June 1, 2021, and June 30 last year.

“This is almost double from the previous year. Such figures show that more are building skills to future-proof their careers,” Rohit points out.

Last month, the Human Resources Ministry said that, with the rise of AI use, as many as 4.5 million Malaysians are likely to lose their jobs by 2030 if they do not improve their skills or attend reskilling and upskilling programmes.

By YUEN MEIKENG


 

 


Monday, February 13, 2023

Lies, racism and AI: IT experts point to serious flaws in ChatGPT

 


 ChatGPT may have blown away many who have asked questions of it, but scientists are far less enthusiastic. Lacking data privacy, wrong information and an apparent built-in racism are just a few of the concerns some experts have with the latest 'breakthrough' in AI. — Photo: Frank Rumpenhorst/dpa

BERLIN: ChatGPT may have blown away many who have asked questions of it, but scientists are far less enthusiastic. Lacking data privacy, wrong information and an apparent built-in racism are just a few of the concerns some experts have with the latest 'breakthrough' in AI.

With great precision, it can create speeches and tell stories – and in just a matter of seconds. The AI software ChatGPT introduced late last year by the US company OpenAI is arguably today's number-one worldwide IT topic.

But the language bot, into which untold masses of data have been fed, is not only an object of amazement, but also some scepticism.

Scientists and AI experts have been taking a close look at ChatGPT, and have begun issuing warnings about major issues – data protection, data security flaws, hate speech, fake news.

"At the moment, there's all this hype," commented Ruth Stock-Homburg, founder of Germany's Leap in Time Lab research centre and a Darmstadt Technical University business administration professor. "I have the feeling that this system is scarcely being looked at critically."

"You can manipulate this system"

ChatGPT has a very broad range of applications. In a kind of chat field a user can, among others, ask it questions and receive answers. Task assignments are also possible – for example on the basis of some fundamental information ChatGPT can write a letter or even an essay.

In a project conducted together with the Darmstadt Technical University, the Leap in Time Lab spent seven weeks sending thousands of queries to the system to ferret out any possible weak points. "You can manipulate this system," Stock-Homburg says.

In a recent presentation, doctoral candidate and AI language expert Sven Schultze highlighted the weak points of the text bot. Alongside a penchant for racist expressions, it has an approach to sourcing information that is either erroneous or non-existent, Schultze says. A question posed about climate change produced a link to an internet page about diabetes.

"As a general rule the case is that the sources and/or the scientific studies do not even exist," he said. The software is based on data from the year 2021. Accordingly, it identifies world leaders from then and does not know about the war in Ukraine.

"It can then also happen that it simply lies or, for very specialised topics, invents information," Schultze said.

Sources are not simple to trace

He noted for example that with direct questions containing criminal content there do exist security instructions and mechanisms. "But with a few tricks you can circumvent the AI and security instructions," Schultze said.

With another approach, you can get the software to show how to generate fraudulent emails. It will also immediately explain three ways that scammers use the so-called "grandchild trick" on older people.

ChatGPT also can provide a how-to for breaking into a home, with the helpful advice that if you bump into the owner you can use weapons or physical force on them.

Ute Schmid, Chair of Cognitive Systems at the Otto Friedrich University in Bamberg, says the central challenge is that we cannot find out how the AI reaches its conclusions. “A deeper problem with the GPT-3 model lies in the fact that it is not possible to trace when and how which sources made their way into the respective statements,” she said.

Despite such grave shortcomings, Schmid still argues that the focus should not be only on the mistakes or possible misuse of the new system – such as students having their homework or research papers written by the software. “Rather, I think that we should ask ourselves: what opportunities do such AI systems present us?”

Researchers generally advocate exploring how AI can expand – possibly even promote – our competencies, rather than limit them. “This means that in the area of education I must also ask myself – as perhaps was the case 30 years ago with pocket calculators – how can I shape education with AI systems like ChatGPT?”

Data privacy concerns

All the same, concerns remain about data security and protecting data. "What can be said is that ChatGPT takes in a variety of data from the user, stores and processes it and then at a given time trains this model accordingly," says Christian Holthaus, a certified data protection expert in Frankfurt. The problem is that all the servers are located in the United States.

“This is the actual problem – if you do not succeed in establishing this technology in Europe, or having your own,” Holthaus said. In the foreseeable future there will be no data protection-compliant solution. With regard to European Union data protection regulations, Stock-Homburg adds: “This system is regarded as rather critical.”

ChatGPT was developed by OpenAI, one of the leading AI firms in the US. Software giant Microsoft invested US$1bil (RM4.25bil) in the company back in 2019 and recently announced plans to pump further billions into it. The company aims to make ChatGPT available to users of its own cloud service Azure and the Microsoft Office package.

"Still an immature system"

Stock-Homburg says that at the moment ChatGPT is more for private users to toy around with – and by no means something for the business sector or security-relevant areas. “We have no idea how we should deal with this as-yet-immature system,” she said.

Oliver Brock, who heads the Robotics and Biology Laboratory at the Technical University of Berlin, sees no “breakthrough” yet in AI research. Firstly, the development of AI does not go by leaps and bounds but is a continuing process. Secondly, the project represents only a small part of AI research.

But ChatGPT might be regarded as a breakthrough in another area – the interface between humans and the internet. "The way in which, with a great deal of computing effort, these huge amounts of data from the internet are made accessible to a broad public intuitively and in natural language can be called a breakthrough," says Brock. – dpa    

By Oliver Pietschmann, Christoph Dernbach
