
Fundamental rights and artificial intelligence: Under the constitutional framework

By:
Material type: Text
Publication details: New Delhi: Law & Justice Publishing, 2025
Description: 79 p.
ISBN:
  • 9789348076144
Subject(s):
DDC classification:
  • 006.3 ATR
Summary: [Fundamental Rights and Artificial Intelligence] This book considers the creation of Artificial Intelligence (AI) and its unfair and harmful effects on humans. AI safety begins with ensuring that a deployed AI system aligns with its intended goals and remains consistent with social objectives. Where an AI system acts in unintended and harmful ways, it poses significant risks to industry. AI tools are often reported to produce biased or inappropriate outputs due to flaws in training data or insufficient oversight mechanisms. Autonomous vehicles, or long- and short-range pilotless drones, may spare the life of a driver or pilot, but they can endanger the lives of hundreds of people in the event of an accident or in the area where a drone explodes. Key risks include hallucinations, copyright violations, misinformation, and goal misalignment in human-AI interactions; language models can generate plausible but entirely false claims, potentially amplifying the spread of fake news. The Wadhwani School of Data Science and Artificial Intelligence holds that achieving AI alignment requires better training data, robust model testing, and regular updates to keep pace with an evolving social environment. In most cases, even highly successful AI models cannot assure one hundred percent security. The safety of an AI system extends beyond its models and algorithms; it should also cover data collection, data filtering, the implementation of output guardrails, and continuous monitoring after deployment. AI systems can unintentionally damage the cultural atmosphere, displace livelihoods (for example, in the case of bombardment through aerial drone strikes), and deepen social inequalities. Without careful consideration, the indiscriminate use of AI models can change cultural values beyond repair. The greatest danger is rendering a large manual workforce valueless, left without the capacity to earn a living. AI developers must therefore adopt a careful and comprehensive framework for AI safety. This includes improving AI alignment to mitigate obvious current risks, ensuring the safety of entire AI systems rather than focusing solely on individual models, and taking into account the broader social impacts of AI technologies.
List(s) this item appears in: New Arrivals December 2025
Holdings
Item type: Books
Current library: Gandhi Smriti Library
Call number: 006.3 ATR
Status: Available
Barcode: 185050
Total holds: 0

