
LinkedIn is using your data to train AI without your knowledge

Recently, it was revealed that LinkedIn has been taking its users’ data and using it to train its AI without their knowledge. LinkedIn automatically opted user accounts into giving the company permission to use their data to train its AI models before rolling out a statement informing users that:

“We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”

LinkedIn has since given users various ways to revoke this permission and opt out of having their data used to train LinkedIn AI. To ease users’ fears, the company has released an FAQ stating that it uses “privacy-enhancing technologies to redact or remove personal data” from its training sets and that it doesn’t train its models on data from users who live in the EU, EEA, or Switzerland (due to GDPR and the uphill battle AI companies face given the policies around data and privacy).
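
LinkedIn hasn’t published the details of how its redaction works, but to make the idea concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical patterns and placeholder tokens of my own choosing) of scrubbing obvious personal identifiers from text before it reaches a training corpus. Real privacy-enhancing pipelines are far more involved, typically combining named-entity recognition, differential privacy, and human review.

```python
import re

# Illustrative only: crude regex patterns for two obvious identifier types.
# (These patterns and placeholder tokens are assumptions, not LinkedIn's actual method.)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Mask obvious personal identifiers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

# Example: a profile snippet is scrubbed before being added to a training set.
sample = "Contact Jane at jane.doe@example.com or +1 415-555-0100."
print(redact(sample))  # Contact Jane at [EMAIL] or [PHONE].
```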

Many of us don’t know what we’ve given companies permission to do when it comes to data. I mean, how many of you actually read the entire terms of service document when signing up for an app or service? This isn’t a big deal in most cases, especially for ‘regular’ app users. However, for corporations, it’s a different story.

Corporations are typically fearful of putting their data into AI chatbots and often have policies prohibiting their employees from inputting proprietary information into these AI products. This is because you never really know where your data will end up or how it is being used. In LinkedIn’s case, the company says the data it trains its AI on is subsequently used in its generative AI product, which powers tools like its writing assistant. That being said, if sensitive or proprietary information is used to train a language model, you can bet that it will appear in one of the chatbot’s outputs one day, even if it isn’t blatant. This creates all sorts of risks for corporations, especially those that handle sensitive information like personal identification and financial information.

California passes AI deepfake laws to combat election misinformation

California Governor Gavin Newsom has signed into law three artificial intelligence-related bills that aim to crack down on illicit uses of AI around the upcoming United States Presidential election. One of the new laws makes it illegal to create and publish election-related deepfakes from 120 days before Election Day until 60 days after it. It also allows courts to stop the distribution of the materials and impose civil penalties. Another bill requires political campaigns to publicly disclose if they are running ads with materials altered by AI, but will this be enough?

It can be difficult to enforce laws that relate to activities taking place digitally. There aren’t always fast or efficient ways to track down the individuals behind these activities; in some cases, they are never caught or identified. The law on creating and publishing deepfakes also seems a bit lenient: why only 120 days before the election and 60 days afterward, rather than at all times?

Although these bills sound like a step in the right direction, the fact that they only apply to California residents and matters related to California makes them significantly limited. Someone in another state who posted content prohibited under California’s new law would not face the same legal consequences as someone in California, especially if the content wasn’t affecting or influencing California residents. For the bills to be more effective, they would need to be adopted across the United States; beyond that, there would need to be a better way to pinpoint where digital crimes originate so that perpetrators are more likely to be caught and face the consequences.

Amazon launches AI video generator, Amelia AI tool at Amazon Accelerate Conference

During Amazon Accelerate, its annual conference for sellers, Amazon Ads (NASDAQ: AMZN) announced the launch of an AI video generator that transforms a single product image into a full-fledged marketing video. Amazon says it launched the product in response to a customer pain point: the challenges many businesses face with video marketing. A recent study from Wyzowl reported that 89% of consumers want to see more videos from brands in 2024, yet businesses cited lack of time and cost as the top barriers to producing video.

Amazon Ads aims for the video generator to solve this problem, allowing users to create videos and live images using still photos of their products.

In addition to the video generator, Amazon announced the launch of an AI tool called Amelia for third-party sellers. This group accounts for over half of all goods sold on Amazon. Amelia will help sellers troubleshoot various problems they may experience, such as resolving account issues, filing claims, or addressing any other challenges that arise.

These types of tools move the ball forward for what I will call the average person. Consumers are still using AI in very rudimentary ways and gravitate toward AI chatbots for their ability to quickly answer questions and provide information that directly solves user problems, whether in their personal or work lives. Consumers are the group that made this wave of AI popular, and providing the tools and resources they need is likely to be an effort that will see success without much resistance.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Improving logistics, finance with AI & blockchain
