Stranas KA: National Strategy for Artificial Intelligence Governance in Indonesia
Stranas KA, short for Strategi Nasional Kecerdasan Artifisial (literally, "National Artificial Intelligence Strategy"), regulates the utilisation of artificial intelligence in Indonesia. The primary foundational principle outlined in the document is that artificial intelligence policy must be oriented toward benefiting humanity. The document states that trusted artificial intelligence can be realised only if the following requirements are met. These are the strategic issues related to AI (artificial intelligence):
Human as Supervisor
Supervision can be achieved through governance mechanisms such as the following approaches:
- Human-in-the-loop (HITL): This refers to the ability of humans to intervene at every decision cycle of the artificial intelligence system.
- Human-on-the-loop (HOTL): HOTL refers to the ability of humans to intervene during the system design cycle and monitor system operations.
- Human-in-command (HIC): HIC refers to the capability to oversee all activities of the artificial intelligence system, including monitoring its broader economic, social, legal, and ethical impacts, and deciding when and how to use the system in specific situations.
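As an illustration, the HITL mechanism above can be sketched as a decision gate in which a human reviewer sits inside every decision cycle and holds the final say. This is a minimal, hypothetical sketch: the function names, the scoring rule, and the override threshold are all illustrative assumptions, not part of Stranas KA.

```python
# Hypothetical human-in-the-loop (HITL) gate: every AI recommendation
# passes through a human reviewer before it takes effect.
# All names (ai_decide, human_review, decide) and thresholds are illustrative.

def ai_decide(application: dict) -> str:
    """Toy stand-in for an AI model's recommendation."""
    return "approve" if application.get("score", 0) >= 70 else "reject"

def human_review(application: dict, recommendation: str) -> str:
    """The human supervisor can accept, override, or escalate the AI's output."""
    # In a real system this would be an interactive review step; here we
    # simulate a human escalating borderline cases for further scrutiny.
    if abs(application.get("score", 0) - 70) < 5:
        return "escalate"
    return recommendation

def decide(application: dict) -> str:
    recommendation = ai_decide(application)
    # The human, not the model, produces the final decision (HITL).
    return human_review(application, recommendation)
```

HOTL and HIC differ only in where the human sits: monitoring operations and the design cycle rather than each individual decision, or governing whether and when the system is used at all.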
Formulated Based on Pancasila Values
The development of Artificial Intelligence must be grounded in Pancasila, the foundational ideology of the nation. The principles of a Pancasila-based legal state encompass a social order that respects divinity, humanity, nationalism, democracy, and the welfare of the people. This perspective emphasizes a unique family-oriented approach, prioritizing the common good while upholding the dignity and respect of each individual, rather than an individualistic viewpoint.
Reliable, Safe, Transparent, and Accountable
Artificial intelligence aimed at fostering public trust and accountability must meet the criteria of safety, reliability, and transparency. "Safe" implies that AI systems should be thoroughly tested, suitable for use, and designed not to threaten human safety or compromise human rights. "Transparent" means that AI development should be open to scrutiny by the government and the public to ensure that it is safe and trustworthy; in other words, developers should be transparent about the AI's development process, making the system accountable to its creators. "Reliable" implies that the AI system should offer consistent accessibility, for example through a minimum service-level agreement.