Harvard's school of public policy publishes the Misinformation Review, a journal of peer-reviewed scholarly articles promising "reliable, unbiased research on the prevalence, diffusion, and impact of misinformation worldwide."
This week it reported that "Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI." They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing.
The resulting enhanced potential for malicious manipulation of society's evidence base, particularly in politically divisive domains, is a growing concern... [T]he abundance of fabricated "studies" seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. A second risk lies in the increased possibility that convincingly scientific-looking content was in fact deceitfully created with AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly Google Scholar. However small, this possibility and awareness of it risks undermining the basis for trust in scientific knowledge and poses serious societal risks.
"Our analysis shows that questionable and potentially manipulative GPT-fabricated papers permeate the research infrastructure and are likely to become a widespread phenomenon..." the article points out.
"Google Scholar's central position in the publicly accessible scholarly communication infrastructure, as well as its lack of standards, transparency, and accountability in terms of inclusion criteria, has potentially serious implications for public trust in science. This is likely to exacerbate the already-known potential to exploit Google Scholar for evidence hacking..."
Alorica — which runs customer-service centers around the world — has introduced an AI translation tool that lets its representatives talk with customers in 200 different languages. But according to the Associated Press, "Alorica isn't cutting jobs. It's still hiring aggressively." The experience at Alorica — and at other companies, including furniture retailer IKEA — suggests that AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: that is, it may eliminate some jobs while creating others, and probably make workers more productive in general, to the eventual benefit of the workers themselves, their employers, and the economy. Nick Bunker, an economist at the Indeed Hiring Lab, said he thinks AI "will affect many, many jobs — maybe every job indirectly to some extent. But I don't think it's going to lead to, say, mass unemployment..."
[T]he widespread assumption that AI chatbots will inevitably replace service workers, the way physical robots took many factory and warehouse jobs, isn't becoming reality in any widespread way — not yet, anyway. And maybe it never will. The White House Council of Economic Advisers said last month that it found "little evidence that AI will negatively impact overall employment." The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways... The outplacement firm Challenger, Gray & Christmas, which tracks job cuts, said it has yet to see much evidence of layoffs that can be attributed to labor-saving AI. "I don't think we've started seeing companies saying they've saved lots of money or cut jobs they no longer need because of this," said Andy Challenger, who leads the firm's sales team. "That may come in the future. But it hasn't played out yet."
At the same time, the fear that AI poses a serious threat to some categories of jobs isn't unfounded. Consider Suumit Shah, an Indian entrepreneur who caused an uproar last year by boasting that he had replaced 90% of his customer support staff with a chatbot named Lina. The move at Shah's company, Dukaan, which helps customers set up e-commerce sites, shrank the response time to an inquiry from 1 minute, 44 seconds to "instant." It also cut the typical time needed to resolve problems from more than two hours to just over three minutes. "It's all about AI's ability to handle complex queries with precision," Shah said by email. The cost of providing customer support, he said, fell by 85%...
Similarly, researchers at Harvard Business School, the German Institute for Economic Research and London's Imperial College Business School found in a study last year that job postings for writers, coders and artists tumbled within eight months of the arrival of ChatGPT.
On the other hand, after Ikea introduced a customer-service chatbot in 2021 to handle simple inquiries, it didn't result in massive layoffs, according to the article. Instead, Ikea ended up retraining 8,500 customer-service workers to handle other tasks, like advising customers on interior design and fielding complicated customer calls.
Security engineer Bryan Hance co-founded the nonprofit Bike Index back in 2013, reports the Los Angeles Times, "where cyclists can register their bikes and contact information, making it easier to reunite lost or stolen bikes with their owners." It now holds descriptions and serial numbers of about 1.3 million bikes worldwide.
"But in spring 2020, Hance was tipped to something new: Scores of high-end bikes that matched the descriptions of bikes reported stolen from locations across the Bay Area were turning up for sale on Facebook Marketplace and Instagram pages attached to someone in Mexico, thousands of miles away..." The Facebook page he first spotted disappeared, replaced by pages that were blocked to U.S. computers; Hance managed to get in anyway, thanks to creative use of a VPN. He started reaching out to the owners whose stolen bikes he suspected he was seeing for sale. "Can you tell me a little bit about how your bike was stolen," he would ask. Often, the methods were sophisticated and selective. Thieves would break into a bicycle room at an apartment complex with a specialized saw and leave minutes later with only the fanciest mountain bikes...
Over time, he spoke to more than a dozen [police] officers in jurisdictions across the Bay Area, including Alameda, Santa Clara, Santa Cruz, Marin, Napa and Sonoma counties... [H]ere was Hance, telling officers that he believed he had located a stolen bike, in Mexico. "That's gone," the officer would inform him. Or, one time, according to Hance: "We're not Interpol." Hance also tried to get Meta to do something. After all, he had identified what could be hundreds of stolen bikes being sold on its platforms, valued, he estimated, at well over $2 million. He said he got nowhere...
[Hance] believed he'd figured out the identity of the seller in Jalisco, and was monitoring that person's personal social media accounts. In early 2021, he had spotted something that might break open the case: the name of a person who was sending the Jalisco seller photos of bikes that matched descriptions of those reported stolen by Bay Area cyclists. Hance theorized that person could be a fence who was collecting stolen bikes on this side of the border and sending photos to Jalisco so they could be posted for sale. Hance hunted through the Jalisco seller's Facebook friends until he found the name there: Victor Romero, of San Jose. More sleuthing revealed that a man by the name of Victor Romero ran an auto shop in San Jose, and, judging by his own Facebook photos, was an avid mountain biker. There was something else: Romero's auto shop in San Jose had distinctive orange shelves. One photo of a bike listed for sale on the Jalisco seller's site had similar orange shelves in the backdrop.
Hance contacted a San Francisco police detective who had seemed interested in what he was doing. Check out this guy's auto shop, he advised. San Francisco police raided Romero in the spring of 2021. They found more than $200,000 in cash, according to a federal indictment, along with screenshots from his phone they said showed Romero's proceeds from trafficking in stolen bikes. They also found a Kona Process 153 mountain bike valued at about $4,700 that had been reported stolen from an apartment garage in San Francisco, according to the indictment. It had been disassembled and packaged for shipment to Jalisco.
In January, a federal grand jury indicted Victoriano Romero on felony conspiracy charges for his alleged role in a scheme to purchase high-end stolen bicycles from thieves across the Bay Area and transport them to Mexico for resale.
But bikes continue to be stolen, and "The guy is still operating," Hance told the Los Angeles Times.
"We could do the whole thing again."
Amazon's Audible will begin inviting a select group of US-based audiobook narrators to train AI on their voices, the clones of which can then be used to make audiobook recordings. From a report: The effort, which kicks off next week, is designed to add more audiobooks to the service, quickly and cheaply -- and to welcome traditional narrators into the evolving world of audiobook automation which, to date, many have regarded warily. Last year, Audible began offering US-based, self-published authors who make their books available on the Kindle Store the option of having their works narrated by a generic "virtual voice." The initiative has been popular. As of May, more than 40,000 books on Audible were marked as having made use of the technology. Under the new arrangement, rather than limiting the audio work entirely to company-owned synthetic voices, Audible will be encouraging professional narrators to get in on the action.
I'm even more impressed by their subtle psychological tricks. At each step of the way, they left out information that required me to ask for something if I wanted to proceed. It's a lot easier to be on your guard when others are asking you for things. When you're the one doing the asking, it's even harder to say something when things look strange, because you may already feel like you're being a burden on their time. For the initial ad, they left out the phone number, so I had to ask. After they told me I could look at their Airbnb site, I had to ask for a link. Then, after they sent me to search on Airbnb's site, I had to ask for the link again! That was deliberately planned! Throughout these interactions, they mentioned there were other people looking, maintaining a plausible sense of urgency. Finally, using Airbnb as the phishing site was clever, because it gave the impression of a trusted middleman. I was genuinely thrown off at first, because I couldn't figure out how they were planning to steal my financial information. If they had just asked for bank or credit card information early on, their game would have been easy to spot.