2025 Blockathon: Blockchain & Responsible AI – July 17-19, 2025
For the second year in a row, the Blockathon for Social Good will focus on Blockchain & Responsible AI.
Artificial intelligence (AI) has developed rapidly in the past two years, largely due to the widespread adoption of ChatGPT and other large language models. This progress has led industry experts and governments to anticipate the dawn of an AI era characterized by heightened automation, efficiency, and productivity. Startups and established organizations have begun to invest billions of dollars in AI development, ranging from infrastructure and industry software to consumer applications. Many industries are attempting to adopt AI: for example, the pharmaceutical industry seeks to use AI to accelerate drug discovery, and the finance industry aims to use AI to automate banking and trading.
Despite AI's immense potential to revolutionize industries and economies, scientists have warned of its risks. Biased AI algorithms, for instance, can reinforce discrimination, while the lack of transparency in AI training models raises concerns about data privacy and usage. Moreover, AI may disrupt the labor market, render many occupations irrelevant, and erode the boundary between human lives and machinery. To harness the benefits of AI while mitigating risks, governments worldwide are actively exploring AI regulations.
Recently, the concept of responsible AI has gained much attention among scientists and industry leaders. According to IBM, responsible AI refers to “a set of principles guiding the design, development, deployment, and use of AI, fostering trust in solutions that empower organizations and stakeholders” (https://www.ibm.com/topics/responsible-ai). Specifically, responsible AI entails that:
- AI systems must be aligned with ethical principles, human values, and societal norms. This includes respect for human dignity, privacy, autonomy, fairness, transparency, accountability, and non-discrimination.
- AI systems need to be explainable and transparent. Developers should be able to explain how they develop AI models, what data they use, how they access the data, and what factors influence outcomes.
- AI systems should be fair. They should not reinforce social biases based on gender, ethnicity, sexual orientation, disability, etc., especially in hiring and access to healthcare.
- Responsible AI should not erode people’s right to privacy. AI systems should collect, use, and retain personal data only for legitimate purposes and with individuals’ informed consent, and should not expose people’s sensitive information.
- Responsible AI prioritizes security and should be resistant to various security attacks.
- AI systems must be held accountable. Mechanisms for auditing, oversight, and recourse in case of adverse impacts or violations must be in place.
Experts argue that blockchain technology can play a pivotal role in realizing responsible AI. Blockchain, a distributed ledger characterized by immutability, transparency, and a tamper-proof nature, offers advantages in enhancing the fairness of AI systems, for example, through greater security, privacy, and transparency. By integrating blockchain with AI systems, users gain visibility into data sources and model training processes, while regulators can audit compliance with regulatory requirements. Some initiatives aim to develop blockchain-based data marketplaces for AI, where individuals retain control over their data and smart contracts automate data sharing. Others propose applying decentralized governance structures such as DAOs to AI development, ensuring democratic decision-making and reflecting the interests of the majority. The emerging field of decentralized physical infrastructure has much overlap with AI development, providing decentralized environments to enable greater security, privacy, and accountability in AI development.
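One concrete way blockchain can give users and regulators visibility into data sources is by anchoring a cryptographic fingerprint of each dataset on an immutable ledger, so that any later audit can verify the data was not altered. The following is a minimal sketch of that pattern; the in-memory `ledger` list, the `anchor`/`verify` helpers, and the dataset names are all illustrative assumptions, and a real deployment would write to an actual blockchain via its client library.

```python
import hashlib
import json
import time

def fingerprint(record: bytes) -> str:
    """Content hash that can be anchored on-chain for provenance."""
    return hashlib.sha256(record).hexdigest()

ledger = []  # illustrative stand-in for an append-only blockchain ledger

def anchor(dataset_id: str, record: bytes) -> dict:
    """Record a dataset fingerprint so later audits can verify integrity."""
    entry = {
        "dataset_id": dataset_id,
        "sha256": fingerprint(record),
        "timestamp": time.time(),
    }
    ledger.append(entry)
    return entry

def verify(dataset_id: str, record: bytes) -> bool:
    """An auditor recomputes the hash and checks it against the ledger."""
    digest = fingerprint(record)
    return any(e["dataset_id"] == dataset_id and e["sha256"] == digest
               for e in ledger)

data = json.dumps({"rows": 3, "source": "archive-A"}).encode()
anchor("archive-A/v1", data)
print(verify("archive-A/v1", data))         # True: content untampered
print(verify("archive-A/v1", data + b"x"))  # False: content changed
```

Because only the hash is published, the data itself can remain private while its integrity stays publicly checkable.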
In this year’s Blockathon for Social Good, students will build upon the progress made in 2024. Students will be tasked with addressing a specific challenge relating to responsible AI that blockchain can solve, by enhancing the capabilities of ClioX, a prototype solution to establish a fair data AI ecosystem over blockchain for historical archives and other cultural institutions. ClioX builds upon the PontusX ecosystem, which brings together a multitude of companies and institutions from different fields and industries, such as Airbus, EuProGigant, deltaDAO, Cooperants, Gaia-X, Exoscale, TU Wien, IONOS, Wobcom, Staatsbibliothek zu Berlin, TU Darmstadt, BigchainDB, Materna, Neusta Aerospace, Software AG, Exaion, and Arsys.
Prizes:
Top Prize: $2,500 for the winning team
Runner up team: $1,000
Honorary mention team: $500
Challenge 1 addresses the difficulty of the “sensitivity reviews” that archivists in many jurisdictions must undertake before they can provide public access to materials that could contain personal information. While there is ongoing research into automated tools for managing sensitivity reviews of archival documents, these techniques are still evolving and face significant challenges. The approach to be used in this challenge involves treating all materials held in archival corpora as confidential and using novel privacy-preserving AI to defend against accidental or deliberate data leakage.
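The confidential-by-default stance can be illustrated with a toy redaction pass that masks anything matching a sensitive pattern before text leaves the archive. This is only a sketch: the two regex patterns below are illustrative assumptions, and real sensitivity review would rely on trained named-entity recognition models rather than regular expressions.

```python
import re

# Toy patterns for illustration only; production sensitivity review
# would use trained NER models, not regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

doc = "Contact Jane at jane.doe@example.org or 555-123-4567."
print(redact(doc))
# Contact Jane at [EMAIL] or [PHONE].
```

The key design choice is that the default action is to mask: material is only released once it has passed through the filter, never the other way around.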
Challenge 2 is to combine a technique increasingly used by researchers in the Digital Humanities called “Distant Reading” (Moretti, 2013) with Decentralized Privacy Preserving Machine Learning and other privacy-enhancing techniques to protect personal information from exposure whilst providing researchers with new capabilities to tell insightful “data stories”. Distant Reading uses Text Mining, Natural Language Processing, and AI to help researchers analyze large archival corpora. Typically, the output of such analyses is a visualization that represents broad patterns that can be gleaned from archival records, such as the communication patterns between geolocations, public sentiment over time, or topics or themes represented in a corpus of archival text. Using Distant Reading enables researchers to learn from large archival corpora without having to inspect and analyze each individual document, which, in turn, relieves archivists of the burden of undertaking sensitivity reviews before providing public access to their holdings.
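The aggregation step behind Distant Reading can be sketched in a few lines: word counts are pooled per year so that researchers see corpus-level patterns, never individual documents. The tiny corpus, stopword list, and `top_terms_by_year` helper below are all illustrative assumptions, not part of ClioX.

```python
from collections import Counter

# Toy corpus: (year, text) pairs standing in for archival records.
corpus = [
    (1890, "the harvest failed and famine spread"),
    (1890, "famine relief committees formed"),
    (1891, "railway expansion reached the coast"),
    (1891, "trade along the railway grew"),
]

STOPWORDS = {"the", "and", "along"}

def top_terms_by_year(records, k=2):
    """Pool word counts per year, a pattern-level view of the corpus
    that never surfaces any single document to the reader."""
    by_year = {}
    for year, text in records:
        words = [w for w in text.lower().split() if w not in STOPWORDS]
        by_year.setdefault(year, Counter()).update(words)
    return {year: [w for w, _ in counts.most_common(k)]
            for year, counts in by_year.items()}

print(top_terms_by_year(corpus))  # "famine" tops 1890, "railway" tops 1891
```

A visualization layer (topics over time, sentiment curves, geolocation networks) would sit on top of exactly this kind of aggregate, which is what makes the output safe to share.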
Challenge 3 recognizes that Distant Reading alone will not prevent data leakage unless the algorithms used are themselves privacy preserving. This calls for a solution that combines Distant Reading with Privacy Preserving Federated Machine Learning - a collaborative learning method wherein multiple parties train a model without centralizing their data or exposing it to other parties (Chen et al, 2021). Decentralizing the Privacy Preserving Federated Machine Learning has the additional advantage of protecting models against attacks on their integrity and against single points of failure. The goal will be to design a fully decentralized protocol for AI in the ClioX ecosystem at both the edge and the aggregation layers.
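The core mechanic of federated learning, clients train locally and only model parameters (never raw data) are averaged, can be sketched with a one-parameter model. Everything below is a toy assumption: three simulated clients, a linear model y = w*x trained by gradient descent, and a plain FedAvg-style mean; a real system would add secure aggregation and differential privacy on top.

```python
import random

def local_update(weights, data, lr=0.1):
    """One round of local training on a client's private data
    (gradient descent on squared error for the model y = w * x)."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """FedAvg-style round: each client trains locally; only the
    resulting weights, never the raw samples, are averaged."""
    local_ws = [local_update(global_w, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

random.seed(0)
# Each client privately holds noisy samples of y = 3 * x.
clients = [[(x, 3 * x + random.uniform(-0.1, 0.1))
            for x in [0.1, 0.5, 1.0]] for _ in range(3)]

w = 0.0
for _ in range(20):
    w = federated_average(w, clients)
print(f"learned slope: {w:.2f}")  # close to 3.0, the true slope
```

Note that the server (or, in a decentralized variant, a smart contract or peer gossip protocol) only ever sees the scalar weights, which is what keeps each archive's documents on-premises.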
Challenge 4 aims at transformative innovation in the cultural sector. ClioX aims to harness next-generation technologies for the decentralized internet, enabling mass coordination and empowering archives and researchers to collaborate toward shared goals—supporting cultural collectives, charitable organizations, mutual aid networks, and advocacy groups. It will promote decentralized and equitable revenue distribution for archives and the cultural sector, while facilitating greater and fairer access to archival materials. In this challenge, the aim will be to develop protocols for decentralized governance of the ClioX ecosystem.
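Decentralized governance ultimately reduces to proposals, votes, and a tally rule that everyone can verify. The sketch below shows a one-member-one-vote tally with a quorum check; the member names, the quorum value, and the `tally` helper are hypothetical, and in a DAO this logic would live in a smart contract rather than a Python function.

```python
from collections import Counter

def tally(votes, quorum=3):
    """One-member-one-vote tally with a quorum check, as a DAO
    governance rule might specify.  `votes` maps member -> 'yes'/'no'."""
    if len(votes) < quorum:
        return "no quorum"
    counts = Counter(votes.values())
    return "passed" if counts["yes"] > counts["no"] else "rejected"

# Hypothetical members of a ClioX-like ecosystem voting on a proposal.
proposal_votes = {
    "archive-A": "yes",
    "archive-B": "yes",
    "researcher-C": "no",
}
print(tally(proposal_votes))        # passed
print(tally({"archive-A": "yes"}))  # no quorum
```

Variants such as token-weighted or quadratic voting change only the tally rule, which is exactly the kind of design decision a governance protocol for ClioX would need to make explicit.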
Participants can address one or more of the challenges in their solution. For each challenge, Blockathon participants may develop a simple demo, such as a wireframe, user interface, novel smart contract, or algorithm, to illustrate their solution. Mentors will be available throughout the Blockathon to work with teams on the design and, to the extent possible, implementation of their solution.