
Semua Bisa Kena: Grok AI
We took the time to gather responses from survivors of Grok-related KBGO (online gender-based violence) and experts in the field of AI.
Words by Whiteboard Journal
Words: Vanya Shizuka, Alvin Anis, Alana Aishah
Illustration: M. Ilham (Acho)
[Trigger warning: This article may be distressing or triggering to some readers.]
All this time, we have been told to keep avoiding: what to hide, who to block, which content is better left unposted. As if the responsibility lies with the person being attacked.
Digital violence is often talked about as something that happens on its own, as if it were a natural consequence of the internet. In reality, there is always a very simple starting point: someone chooses to type something, chooses to upload it, and chooses to attack.
Then came the policy temporarily blocking access to Grok in Indonesia, introduced to curb the non-consensual, sexualised manipulation of images. Sadly, this form of KBGO is nothing new: Komnas Perempuan's CATAHU 2025 records that KBGO reports rose by 40.8 percent from 2024, while SAFEnet documented 605 KBGO incidents through digital monitoring in the third quarter of 2025.
And now, only a few days into 2026, we have witnessed first-hand how massively Grok is being abused to produce sexualised content without consent.
Many X users have tried all sorts of ways to protect themselves from Grok's misuse of their personal content, but as long as perpetrators remain free to keep doing the same thing, the violence itself carries on as usual.
Digital spaces are not the enemy. What keeps breaking them is the permission given to people who use them to harass, exploit, and harm. What needs to be questioned is who feels free to commit violence within them, and why they keep getting away without consequences. Until that changes, digital spaces will continue to feel safe only for some people.
To sharpen our understanding of this misuse of generative artificial intelligence, we took the time to gather responses from survivors of Grok-related KBGO and experts in the field of AI.
Antonia Timmerman
Journalist
How do you feel about the inappropriate use of AI towards people on the internet, especially women and young girls?
Outraged, but not surprised.
What part of this experience felt most violating: the image itself, the loss of control, or knowing strangers could generate it on demand?
All of the above: the image, the loss of control, and knowing that anyone can do it to anyone.
Another repulsive thing for me is seeing victim-blaming comments, such as questioning why they posted selfies in the first place.
What truly disappoints me to the core is knowing that it could have been prevented from the DESIGN PHASE: Grok developers could have put guardrails in place to prevent misuse like this BEFORE the feature was released to the public. They could have made it impossible to request stripping people's clothes, for instance. From the moment AI image-generating features were launched a few years ago, it was already very apparent to women and girls that these tools could be used against them. Nobody did anything, even as cases exactly like those we feared appeared here and there. Now it has happened on a much larger scale, across many countries. If there is no real and serious intervention, this will become normalized.
But as we know, in the way technologies are developed today, women's and girls' safety is always an afterthought. The responsibility of anticipating the negative impacts of a new technology is almost always pushed onto the individual user, and is very rarely expected of the designers, developers, and owners of that technology themselves.
It is true that bad actors will always exist in any circumstances or spaces, and will always find ways to use any kind of tool for malice. But, as designers and creators, it is part of their job to THINK and ANTICIPATE these potential malicious actors, and to do the best they can to put up safety measures so their tools cannot be misused, especially when it has been very, very obvious from the start that such AI features could be used to sexually assault women online.
The public must understand that this is part of the creators' responsibility, and they cannot evade it.
What are the potential dangers of AI when it comes to hijacking personal identity and invading one's privacy?
There are many use cases for malicious actors. Beyond sexual harassment and assault, they can also use your images to create a fake identity and impersonate you to get through to your other personal data, such as credit cards or bank accounts.
They can also make up fake stories about you — maybe some true and some false — and damage you or your company’s reputation.
But it is not only scammers and "bad people" who have the potential to breach your privacy. Some things are more subtle, but can be just as damaging to our society.
Companies and governments could do (and actually have been doing) that. For example, if a company knows enough about you (where you shop, where you go in the morning, what you do, what you need, what you care about, and so on), then it becomes easier for them to push content onto your feeds that is almost certain to grab your attention: it could be the latest products for you to shop, or it could be content meant to shape your political perspectives and views on certain things. They will have the power to shape what you see on your phone, your reality, your opinions. Most people overestimate their own ability to wade through AI-optimized social media (what we know as the algorithm) without being steered or influenced. This has had deep and wide-ranging impacts on our politics over the past two decades, and will likely be exacerbated in the era of generative AI.
How do you measure harm in AI systems when the damage is psychological, social, or reputational rather than physical?
In this specific case of sexual violence enabled by AI (specifically Grok), indeed the harm inflicted is psychological, social, and reputational rather than physical. Combined with victim blaming, it can have a deep lasting impact on the victims.
Regarding harm in AI systems more generally — I think researchers have only just begun to develop all kinds of frameworks to track and map damages and risks.
Some are keeping track of incidents that pop up and classifying them based on things like type, severity, etc. Some are looking at AI systems from the technical perspective: how they perform, whether they perform as intended, what their limitations are, etc.
Right now (other than the sexual violence), we have also seen mass layoffs and narrowing employment opportunities for our youth, simply because a handful of tech giants in the world say that their AIs can help companies be more efficient. Teenagers have taken their own lives because their AIs influenced them. Students are no longer reading books because of the convenience of AI and other digital technologies.
Most people would dismiss these harms as "the price" to pay to gain the benefits of technological advancement, and so some would say things like "suck it up" or "adapt", so that we as individuals and as collectives give up our right to SAFE and high-quality technologies, and just shut up when we are asked to bear the risks and harms of so-called "progress". This is a paradigm that must be changed. Of course, it is true that as humans we always adapt whenever we need to survive. But using this wonderful human trait as a cover to constantly exploit one another while producing societal-level damage must be unacceptable.
We deserve technologies designed by people and institutions who have our safety and well-being (as you mentioned, well-being in all aspects including psychological, social, reputational, health, and others) as their top priority. Right now, our technologies are being developed with market domination and monopoly as main drivers. This is extremely harmful to our society.
If engineers of Grok and policy makers were in the room with you, what is the one thing you would want them to understand about the real-world impact of this?
I am pretty sure the engineers (or at least SOMEONE in their office) are well aware of the real-world impact of their tool. So are many policy makers, when it comes to the real-world impact of other AI tools in general. Unfortunately, the way these powerful people view technology development is this: 1) harms are the price we pay for technological progress (inevitable), so too much regulation or thinking in the design phase before public release will hinder progress, and this is something we just have to endure… 2) AI is a development that everybody is now "racing to win", so the focus is on this race and not on risk mitigation.
Therefore, in my opinion, the real-world negative impacts happen not because these people who can make a difference do not understand the impacts — it’s because they don’t want to slow down and build it the proper way for fear that “somebody else” will “win the AI race” and become the most powerful company and/or country. When everybody thinks this way, then this is the kind of technology we get.
Since there are no concrete laws governing AI yet, what type of policies would you propose to prevent AI from violating people's privacy?
As far as I know we already have some existing laws to protect people’s privacy online, including against AI-enabled violations. Some experts might even say that we don’t actually need special AI laws, and that strengthening existing laws on digital privacy might be more useful to fight against AI harms.
But I might be wrong. As I am not an expert in law, I suggest asking Ramzy from SAFEnet for his opinion on this, and also to point out exactly which existing regulations can be used right now to our benefit.
What would accountability look like to you: prevention, public apology, stronger limits, legal consequences, or something else?
Sure, a public apology or some kind of acknowledgement would be nice, but what's crucial, I think, are: 1) legal consequences for the perpetrators of the indecent Grok images, 2) legal consequences for the company behind Grok, and 3) suspension until Grok has installed the proper safety guardrails to protect women and girls, and to protect users in general from abuse.
What would be the best advice for us in the digital world amidst the development of AI?
First, at the individual level, there is no harm in stepping up our privacy protection game. If you don't need social media for work, consider locking your profiles or posting fewer selfies.
As consumers, we can play a part by pushing tech companies to do their job and design their products thoughtfully and carefully, with our safety and well-being as the priority.
And then, as citizens, we have to use our voice to push the government to do their job and protect us against greedy tech companies who are rolling out dangerous products under the pretext of "progress". We must ask for legal consequences for companies who violate our rights. We must ask for regulation that ensures technological developments adhere to the public interest in the DESIGN PHASE, not only after they are out in the world.
We must ask that technology be developed according to rule of law, and not law of the jungle.
Aneira Khansa A.
Beauty Content Creator
How do you feel about the inappropriate use of AI towards people on the internet, especially women and young girls?
I feel angry and very frustrated, because my friends and I have done everything we could to keep Grok from altering our photos, and we still got hit anyway. It's very disheartening to know that so many women face the same issue.
What part of this experience felt most violating: the image itself, the loss of control, or knowing strangers could generate it on demand?
In a way, I think everything feels very violating. Women should be able to express themselves freely without worrying about these things. At first, I didn't even think that this was possible. Even if a woman wants to wear revealing clothing, that is our right, and most importantly it happens with our consent. The fact that people can edit women's pictures into something very sexual without consent feels very violating towards our morals.
What are the potential dangers of AI when it comes to hijacking personal identity and invading one's privacy?
This can lead to so much slander. More than once I have seen women edited like this, even women in hijab and modest clothing. My fear is that this can become a tool to humiliate victims… and the identity that a victim has built all this time falls apart because of that AI-generated photo. The worry is that people who don't yet understand AI or technology will mistake the generated photo for a real one. This is VERY wrong on so many levels.
How do you measure harm in AI systems when the damage is psychological, social, or reputational rather than physical?
It's definitely very harmful. Psychologically, it hurts so many women. Even seeing other women having to face this hurts me. It feels like we have little to no freedom. Socially it is also very harmful, because many women have gone as far as closing their X accounts out of fear that their photos will be edited. To me, this is also a silencing of freedom of expression.
If engineers of Grok and policy makers were in the room with you, what is the one thing you would want them to understand about the real-world impact of this?
I want them to know that there are limits to some things. If you're creating a technology this advanced, it should have had moral guidelines or limits from the start. The AI should be able to refuse prompts that are unreasonable or harmful, especially anything heading toward pornography. Actually, I think it's best to shut it down for now, because we have tried all kinds of ways to avoid interacting with Grok and it still cannot be avoided.
Since there are no concrete laws governing AI yet, what type of policies would you propose to prevent AI from violating people's privacy?
Anything harmful towards people, but right now I think it's pornography, because so many women are being targeted at the moment.
What would accountability look like to you: prevention, public apology, stronger limits, legal consequences, or something else?
For prevention, I think the burden should fall on the makers of the AI, not on the victims. Then, for perpetrators who submit harmful prompts, I think legal action should be taken. There must be consequences for what they have done.
What would be the best advice for us in the digital world amidst the development of AI?
Humans have morals. Don't generate something stupid and harmful. It's very frustrating to see so many people getting too comfortable turning other people's pictures into something they didn't consent to. In the end, it shows what kind of person you are.
Sheilla Njoto
AI & Gender Sociologist
How do you feel about the inappropriate use of AI towards people on the internet, especially women and young girls?
I don't see this as "misuse" in the accidental sense. What we are witnessing is the predictable outcome of systems built without prioritising harm prevention for those already most vulnerable online. The key word here is predictable. These harms were not unforeseen or accidental. Ethicists, sociologists, and civil society groups warned about them long before these systems went public. What we're seeing is not a failure of imagination, but rather sheer ignorance or a deliberate disregard.
It ultimately reveals whose bodies and identities the internet still treats as public property. Women and young girls have long been the primary targets of digital harassment, sexualisation, and surveillance. AI didn’t invent this violence; it automated it, scaled it, and made it frictionless.
When a system can generate sexualised images of women or minors on demand, the problem isn’t merely “bad actors.” It’s that the technology was built in a world that already normalised the idea that some people’s dignity is negotiable. The risks were known, named, and documented, yet deployment went ahead because the harms were deemed acceptable collateral. In other words, the suffering was priced in. When the cost of harm is treated as worth the speed, scale, or market advantage, the problem is no longer technical — it is ethical and political.
AI simply removes the last layer of friction. And when harm becomes effortless, it becomes ambient, something you're expected to live with rather than fight. What troubles me most is how quickly these violations are dismissed as edge cases or user behaviour problems. That framing absolves platforms and developers of responsibility. When a system repeatedly enables the sexualisation or violation of women and minors, that is not "neutral technology", the way some would compare it to "a knife is just a tool". It is rather a structural failure of design, governance, and values.
What part of this experience felt most violating: the image itself, the loss of control, or knowing strangers could generate it on demand?
I think the most violating aspect is the loss of agency: the realisation that one’s likeness, body, or identity can be replicated, manipulated, and circulated without consent, at scale, and indefinitely. The image is harmful, yes. But the deeper harm lies in knowing that control over one’s own representation has been permanently compromised.
This shifts the psychological ground beneath a person. You’re not only managing a single incident, but also the knowledge that it can be reproduced endlessly by strangers, without your presence, participation, or ability to intervene. The deepest violation is temporal: the sense that the harm no longer lives in a moment, but in the future.
The image may never even be seen, but the knowledge that it could be generated at any time permanently alters how you exist in the world.
What are the potential dangers of AI when it comes to hijacking personal identity and invading one's privacy?
AI introduces a new phase of identity theft (one that is not about stealing credentials, but about appropriating personhood). Faces, voices, styles, and behaviours can now be simulated with enough accuracy to blur the line between representation and impersonation. And we're not just talking about "defamation" or "harassment": it's about our identity becoming reproducible data, so that privacy stops being a right and becomes a technical vulnerability. The danger is rather ontological: when systems can convincingly perform "you" without your participation, the boundary between self and representation collapses. And when that boundary collapses, accountability becomes impossible, because no one knows where responsibility begins or ends!
How do you measure harm in AI systems when the damage is psychological, social, or reputational rather than physical?
I think we’re still trapped in a legal and technical mindset that only recognises harm when it is visible, physical, or economically quantifiable. But psychological and social harms are cumulative and often more enduring. In this case, harm should be measured through impact trajectories, instead of just isolated incidents. If people change how they speak, appear, or participate online because they fear AI misuse, that is measurable harm. Silence, disappearance, and self-censorship are the outcomes.
If engineers of Grok and policy makers were in the room with you, what is the one thing you would want them to understand about the real-world impact of this?
I would want them to understand that harm is not an externality. I would want them to understand that harm does not begin when something goes “wrong.” It begins when something works exactly as intended, and that intent is sheer ignorance.
Harm is a design outcome. When a system is designed to be fast, unfiltered, and culturally reactive, the question is not whether it will be misused, but who will be most exposed when it is. And most of the time, the platforms (or the founders of those platforms) remain untouchable. They let people focus on blaming only the individual actors. But you cannot separate technical ambition from moral consequence. If your system depends on later moderation to fix foreseeable harm, then harm is not just a side effect: it is the operating model.
Since there are no concrete laws governing AI yet, what type of policies would you propose to prevent AI from violating people's privacy?
At minimum, we need consent-based data use, enforceable identity protection, and clear liability chains. Platforms should be legally responsible for foreseeable harms enabled by their systems, not just users protecting each other. I would also argue for mandatory risk assessments before deployment, especially for generative systems capable of producing realistic human likenesses. And crucially, individuals should have the right to opt out of being generatable, not just scraped.
Policies today have to recognise that in a generative world, absence is no longer protection. The burden cannot be placed on individuals to constantly defend themselves against technologies they never agreed to enter. Regulation should intervene upstream — before harm is made scalable.
What would accountability look like to you: prevention, public apology, stronger limits, legal consequences, or something else?
Accountability starts with prevention. But accountability also means constraint, and that's why tech companies hate it. Accountability means slowing down, narrowing capabilities, and accepting that not everything that can be built should be released. It would also mean acknowledging that reputational damage to companies is not equivalent to psychological damage to people. Legal consequences matter, but so does cultural accountability: naming harm clearly, without euphemism, and refusing to normalise it as the cost of progress.
What would be the best advice for us in the digital world amidst the development of AI?
Don’t confuse adaptation with agency. Don’t let inevitability replace imagination. The most dangerous idea in tech is that “this is just how things are now.”
Just because a tool is widespread doesn’t mean its trajectory is inevitable. We still have agency. But that agency should be practiced collectively, not just individually. We need to resist the narrative that questioning AI is anti-progress. And that’s why the most important skill right now isn’t technical literacy alone but ethical literacy. Because what is progress if it requires silence, fear, and resignation? Isn’t that coercion?



