
TELUS International Survey Reveals Nearly Two-Thirds of Consumers Not Aware Media Companies Restrict Generative AI (GenAI) Models From Being Trained on Their Articles and Content

More than half (55%) of consumers surveyed believe they understand how Generative AI (GenAI) models are trained. However, nearly two-thirds (60%) were not aware that some media companies (most recently, The New York Times) have restricted access to their information and data, including articles and general site content, for use in training GenAI models.

    That’s according to a recent TELUS International (NYSE and TSX: TIXT) survey of 1,000 U.S. adults who are familiar with GenAI.

Asked about the impact of GenAI models not being informed by media companies' content, more than half of consumers expressed concern that the resulting content will be inaccurate (54%) or biased (59%). When asked which alternative sources they would most trust to educate and inform GenAI models, higher education institutions (48%) and scientific journals (44%) were the top choices. Conversely, the least trusted sources were social media conversations (45%), review websites (27%) and brand websites (25%).

Transparency And Responsibility Lie With Brands

“There is growing concern by media companies and content creators about what becomes of their intellectual property when it is used as source material to train GenAI models, so naturally they are beginning to set guardrails. Many media companies have already updated their terms and conditions to include rules that forbid their content from being used to train AI systems, and are blocking AI web crawlers from accessing their text, images, audio and video clips, and photos,” said Siobhan Hanna, VP and Managing Director, AI Data Solutions, TELUS International. “Given that we are in the early stages of developing industry regulations for all aspects of GenAI, including the sourcing of data, it's crucial that companies take responsibility to do the right thing from the very beginning. To protect themselves from potential fines, penalties, legal action and negative brand impacts, those working on AI deliverables must carefully consider where they are scraping or otherwise extracting the data used to power their models. Moreover, this is where a ‘humanity-in-the-loop’ approach to AI is so critical. Even though regulations and permissions around copyrighted material may still be emerging, companies need to consider the broader societal impacts of their actions to ensure that they are operating in a fair and ethical manner.”
