The rationale behind the Dataietica approach is that guaranteeing the integrity of civil liberties and privacy, while still taking advantage of AI's positive potential for protecting human lives, requires accounting for every player in the AI security space – law enforcement, AI researchers, developers, and ordinary citizens – and addressing the concerns and needs of each.
Too often, the debate around ethics, transparency, and trust in AI is confined to a research niche or left as an open question for AI developers and practitioners.
Dataietica brings together all impacted groups through research, training and awareness building efforts designed to help advance our mission.
AI's capacity to change the nature of policing and to improve its performance and effectiveness – identifying persons of interest in crowded spaces, forecasting and predicting crime, and monitoring drivers of violent extremism – is only beginning to be seen. Here are a few examples of how AI can keep us safe.
Dataietica offers pro bono advice to AI practitioners and researchers looking to build sounder safeguards against bias and opacity into their projects.
We also counsel law enforcement agencies and governments on best practices for implementing AI solutions for public safety.
Dataietica also aims to be a resource for private citizens who are curious, or even concerned, about AI's potential role in public safety.
We offer training for law enforcement agencies and government stakeholders in the use of AI for security and policing, with a focus on ethical and transparent foundations.
We develop training and resources for AI practitioners and researchers seeking to better understand the pitfalls of bias and lack of transparency in developing AI tools for security purposes.
We help citizens understand AI for security, as well as its potential implications for them, so that we can all be fully informed.