• Meta reveals what kinds of AI even it would think too risky to release

    From Mike Powell@1:2320/105 to All on Wed Feb 5 10:07:00 2025
    Meta reveals what kinds of AI even it would think too risky to release

    Date:
    Tue, 04 Feb 2025 15:00:00 +0000

    Description:
    Meta wants to work with industry experts, governments, and others to stop AI from becoming a problem to society.

    FULL STORY ======================================================================
    - Meta makes its Frontier AI Framework available to all
    - The company says it is concerned about AI-induced cybersecurity threats
    - Risk assessments and modeling will categorize AI models as critical,
    high, or moderate

    Meta has revealed some concerns about the future of AI despite CEO Mark Zuckerberg's well-publicized intentions to make artificial general
    intelligence (AGI) openly available to all.

    The company's newly-released Frontier AI Framework explores some critical
    risks that AI could pose, including its potential implications on
    cybersecurity and chemical and biological weapons.

    By making its guidelines publicly available, Meta hopes to collaborate with other industry leaders to anticipate and mitigate such risks, identifying potential catastrophic outcomes and using threat modeling to establish thresholds.

    Meta wants to prevent catastrophic AI outcomes

    Stating that "open sourcing AI is not optional; it is essential," Meta outlined in
    a blog post how sharing research helps organizations learn from each other's assessments and encourages innovation.

    Its framework works by proactively running periodic threat modeling exercises to complement its AI risk assessments. Modeling will also be used if and when an AI model is identified as potentially exceeding current frontier
    capabilities, at which point it becomes a threat.

    These processes are informed by internal and external experts, resulting in
    one of three risk categories: critical, where development of the
    model must stop; high, where the model in its current state must not be released; and moderate, where further consideration is given to the release strategy.

    Some threats include the discovery and exploitation of zero-day vulnerabilities, automated scams and fraud, and the development of
    high-impact biological agents.

    In the framework, Meta writes: "While the focus of this Framework is on our efforts to anticipate and mitigate risks of catastrophic outcomes, it is important to emphasize that the reason to develop advanced AI systems in the first place is because of the tremendous potential for benefits to society
    from those technologies."

    The company has committed to updating its framework with the help of
    academics, policymakers, civil society organizations, governments, and the wider AI community as the technology continues to develop.

    ======================================================================
    Link to news story: https://www.techradar.com/pro/meta-reveals-what-kinds-of-ai-even-it-would-think-too-risky-to-release

    $$
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Aaron Thomas@1:342/201 to Mike Powell on Wed Feb 5 13:38:12 2025
    Meta has revealed some concerns about the future of AI despite CEO Mark Zuckerberg's well-publicized intentions to make artificial general intelligence (AGI) openly available to all.

    Based on the summary of this article, it sounds like Zuckerberg is interested in policing the world of AI development "because bad stuff might happen if he doesn't."

    It also reminds me of the not too distant past when he stressed the importance of policing fake accounts on Facebook. "Bad stuff would happen."

    --- Mystic BBS v1.12 A49 2023/04/30 (Windows/64)
    * Origin: JoesBBS.Com, Telnet:23 SSH:22 HTTP:80 (1:342/201)
  • From Mike Powell@1:2320/105 to AARON THOMAS on Thu Feb 6 09:13:00 2025
    Based on the summary of this article, it sounds like Zuckerberg is interested in policing the world of AI development "because bad stuff might happen if he doesn't."

    It also reminds me of the not too distant past when he stressed the importance
    of policing fake accounts on Facebook. "Bad stuff would happen."

    It is sort of all related. Zuckerberg is not the only one who is warning
    of the possibilities of unregulated AI. It can be, and already is, used
    in cyberattacks and scams that are going to cost victims, companies, and eventually taxpayers millions.


    * SLMR 2.1a * Mistress - something between a mister and a mattress.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Aaron Thomas@1:342/201 to Mike Powell on Thu Feb 6 15:32:46 2025
    It also reminds me of the not too distant past when he stressed the importance
    of policing fake accounts on Facebook. "Bad stuff would happen."

    It is sort of all related. Zuckerberg is not the only one who is warning of the possibilities of unregulated AI. It can be, and already is, used in cyberattacks and scams that are going to cost victims,
    companies, and eventually taxpayers millions.

    I'll take your word for it on that, but beware of the "we need to borrow 3 trillion dollars to combat misuse of AI" scam that is imminent.

    --- Mystic BBS v1.12 A49 2023/04/30 (Windows/64)
    * Origin: JoesBBS.Com, Telnet:23 SSH:22 HTTP:80 (1:342/201)
  • From Mike Powell@1:2320/105 to AARON THOMAS on Fri Feb 7 09:42:00 2025
    It is sort of all related. Zuckerberg is not the only one who is warning
    of the possibilities of unregulated AI. It can be, and already is, used in cyberattacks and scams that are going to cost victims, companies, and eventually taxpayers millions.

    I'll take your word for it on that, but beware of the "we need to borrow 3 trillion dollars to combat misuse of AI" scam that is imminent.

    You don't have to take my word for it. I have posted articles about it.
    That said, yes, it would be an opportune time for someone to take advantage
    of the issue in order to get $$$ from the government.

    Mike


    * SLMR 2.1a * A KGB keyboard has no ESC key.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)