The rationale behind the Dataietica approach is that, in order to safeguard civil liberties and privacy while still taking advantage of the positive impact AI can have on protecting human lives, all players in the AI security space – Law Enforcement, AI researchers, developers and ordinary citizens – must be taken into account, and the concerns and needs of each addressed appropriately.
In many cases, the debate around ethics, transparency and trust in AI is confined to research circles or left as an open question for AI developers and practitioners.
Dataietica brings together all impacted groups through research, training and awareness building efforts designed to help advance our mission.
The capacity of AI to change the nature of policing and improve its performance and effectiveness – identifying persons of interest in crowded spaces, forecasting and predicting crime, monitoring for drivers of violent extremism – is only beginning to be seen. These are just a few examples of how AI can help keep us safe.
Dataietica offers pro bono advice to AI practitioners and researchers looking to implement a sounder strategy for guarding against bias and opacity in their projects.
We also counsel Law Enforcement and governments on best practices for implementing AI solutions for public safety purposes.
Dataietica also aims to be a resource for private citizens who are curious about, or concerned by, the potential that AI holds for public safety.
Bias can be built into AI, and algorithmic fairness is something all stakeholders should be concerned about. Achieving fairness requires careful consideration not only of the wider operational, organisational and legal context, but also of the negative personal impacts that opaque and biased AI can have on citizens. Our research aims to tackle these important issues.
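To make the idea of algorithmic fairness concrete, the sketch below shows one common, simplified check: the demographic parity gap, i.e. the difference in positive-prediction rates between groups. The data, group labels and function are hypothetical illustrations only, not part of Dataietica's methodology.

```python
# Minimal sketch of one common fairness check: demographic parity.
# All data and names below are hypothetical, purely for illustration.

def demographic_parity_difference(predictions, groups, positive_label=1):
    """Gap in positive-prediction rates between the most- and least-favoured groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in members if p == positive_label) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical model outputs for two demographic groups (A and B).
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50
```

A large gap like this does not prove a system is unfair on its own, but it flags that its decisions fall unevenly across groups and that the surrounding context and data deserve closer scrutiny.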
The potential of AI for keeping society safe and protecting human life can only be fully realised with the trust, buy-in and cooperation of the citizens affected by these technologies. That trust can only be earned if ordinary people understand how AI works, its potential positive impact and its pitfalls – through education and full transparency.
Law enforcement authorities face a number of challenges in using AI: they must ensure that their use of AI complies with the Fairness, Accountability, Transparency and Explanation (FATE) criteria in order to have confidence in both its efficacy and its legality, yet they often lack sufficient information to do so. We believe that supporting LEAs in this regard will help advance fairness in policing.
Too often, questions of ethics, transparency and trust in AI are left for developers and practitioners to resolve on their own. We support AI practitioners working in Security in adopting ethical algorithm development and human-first AI innovation.
We offer training for LEA and government stakeholders in the use of AI for security and policing, with a focus on ethical and transparent foundations.
We develop training and resources for AI practitioners and researchers looking to better understand the pitfalls of opacity and bias in the development of AI tools for security purposes.
We help citizens understand AI for Security and the implications it might have for them, so that we can all be fully informed.