Iterations

Documenting ideas, reflections, and research-in-progress.
The Kill Switch Fallacy
In recent months, India has woken up to the scale of the so-called "digital arrest" epidemic. According to official estimates, scammers have defrauded citizens of nearly ₹3,000 crore by impersonating police officers, regulators, and enforcement agencies, often keeping victims on video calls for hours, threatening arrest, and coercing them into transferring money "for verification." In response, the Ministry of Home Affairs (MHA), along with the RBI, is evaluating two headline interventions: an emergency Kill Switch inside banking apps, and a form of fraud insurance or shared risk pool across banks.

On paper, both sound sensible. In practice, they rest on a fragile assumption: that victims of digital arrests are still acting with agency. My concern is that these tools may arrive after trust has already collapsed. And once trust fails, buttons and insurance clauses don't help much.

The Policy Response on the Table

Let's start with what the government is actually proposing. The idea is to embed a Kill Switch, an emergency button, inside banking and UPI apps. If a user suspects fraud or realizes they've been targeted, they can hit the Kill Switch to instantly freeze outgoing transactions and banking access. The goal is to stop the "layering" process, where stolen funds are rapidly split across dozens of mule accounts and disappear within minutes.

Technically, this is elegant. Behaviorally, it's far less convincing. In a digital arrest scam, victims are often kept in a state of intense fear. They are told they are being monitored. They are warned not to hang up, not to touch other apps, not to "tamper with evidence." In that psychological state, expecting someone to calmly locate and press a Kill Switch assumes a clarity that simply does not exist.

In one of my recent training sessions with seniors, several participants said something strikingly similar: "If someone says they are police, my first instinct is to obey, not to experiment."
That instinct is exactly what scammers exploit. A Kill Switch assumes the victim realizes they are under attack before complying. Digital arrests work precisely because that realization comes too late.

The second proposal is more structural. The RBI and MHA are reportedly exploring a pooled insurance mechanism, similar to terrorism insurance, where banks and insurers jointly absorb losses from digital fraud. This is a significant shift. Until now, most cyber insurance has excluded first-party fraud, where the victim authorizes the transaction themselves under manipulation. That exclusion is increasingly untenable.

But insurance introduces its own risks. There is the obvious moral hazard problem: if customers believe losses will be reimbursed, vigilance may drop. Less visibly, if banks know a shared pool will absorb losses, their incentives to invest aggressively in prevention weaken. More importantly, fraud insurance raises messy questions no policy brief answers cleanly: What proof is required? An FIR? A bank investigation? Who decides whether someone was genuinely manipulated or merely careless? In India's already slow grievance redressal system, this could easily turn into years of litigation.

Scammers will adapt

There is another assumption quietly embedded in the Kill Switch proposal that deserves scrutiny: that scammers will treat it as a deterrent rather than a design constraint. In reality, scammers are not opportunists reacting in real time. They are professional friction bypassers. Every new safeguard introduced into the system becomes, for them, just another obstacle to engineer around.
Policy · Akshaya R
Notes From My First Two Sessions
For the past few weeks, I have been working on a very small, very specific question: can ordinary people get better at resisting digital arrest scams after just 20–35 minutes of focused training, without becoming more afraid of everything?

I'm not building software, but I like to think I'm running tiny experiments in judgment. The scam I'm focusing on is the so-called "digital arrest" scam. You can read more about the idea here. A caller claims to be from the police, a government department, or a courier service, and over the next hour or two walks the victim into a state of panic so deep that they transfer large sums of money just to make the fear go away.

A close family member of mine fell victim to this and lost a significant amount of money. That's the personal reason. The intellectual reason is that this scam preys on fear with disturbing precision. It is a live example of what I've been thinking about for months: simulated authority that feels real enough to override hesitation. In this post, I want to share what I actually did in my first training sessions, how I measured changes in judgment, and what I learned from a very small group of people.

What I'm Trying to Test

My hypothesis is simple: with a short, structured, realistic training unit, people can improve their ability to spot digital arrest scams and calibrate their confidence, becoming more accurate without becoming more paranoid. To make that testable, I narrowed it down to three concrete questions:

Can people better distinguish between scam and non-scam messages after training?

Does their self-reported confidence move in a healthy direction? (Better calibrated, not simply more doubtful or more cocky.)

Do they leave the session feeling more prepared, or simply more frightened?

If accuracy goes up but people walk away scared of every phone call, I would treat that as a failure. Right now, the training unit is deliberately basic.
Each session runs 20–35 minutes on Google Meet and follows roughly this structure:

1. Baseline Task (5–7 minutes)

I show participants a small set of short scenarios: call transcripts, WhatsApp messages, email snippets. Some are based on real digital arrest scams; some are legitimate. For each one, they answer two questions: Is this likely to be a scam? (Yes / No / Unsure)
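Measuring "better calibrated, not just more doubtful" can be made concrete. As a minimal sketch (the field names and the 0–1 confidence scale here are my own illustration, not the actual session instrument), accuracy can be scored over definite verdicts while calibration is summarized with a Brier score:

```python
# A minimal scoring sketch for the baseline / post-training task.
# Assumed record shape (illustrative only): a verdict ("yes"/"no"/"unsure"),
# a 0-1 stated confidence that the item is a scam ("p_scam"),
# and the ground truth ("is_scam").

def accuracy(responses):
    """Fraction of definite verdicts that were correct; 'unsure' answers are excluded."""
    definite = [r for r in responses if r["verdict"] != "unsure"]
    if not definite:
        return 0.0
    correct = sum(1 for r in definite if (r["verdict"] == "yes") == r["is_scam"])
    return correct / len(definite)

def brier_score(responses):
    """Mean squared gap between stated confidence and the truth.
    Lower is better; 0 means perfectly calibrated and fully certain."""
    return sum((r["p_scam"] - float(r["is_scam"])) ** 2 for r in responses) / len(responses)

baseline = [
    {"verdict": "yes",    "p_scam": 0.9, "is_scam": True},   # correct, confident
    {"verdict": "unsure", "p_scam": 0.5, "is_scam": False},  # excluded from accuracy
    {"verdict": "yes",    "p_scam": 0.8, "is_scam": False},  # wrong, overconfident
]
print(accuracy(baseline))               # 0.5
print(round(brier_score(baseline), 2))  # 0.3
```

Comparing the two numbers before and after training separates the outcomes that matter here: accuracy can rise while the Brier score worsens if people simply become more extreme in their confidence, which is exactly the "more cocky" failure mode.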
Public Research · Akshaya R
What We Learn from the Sounds We Stop Hearing
In my community, I can tell what kind of morning it is without looking outside. The newspaper arrives first with a thwack, followed by the milk delivery, which thumps the jugs against the gate. Then come the maids, opening and closing doors in quick succession, followed by the garbage collectors announcing themselves with short calls and the scrape of bins dragged across concrete. Dogs bark in advance of every arrival. Soon after, the delivery bikes and vans appear: the brief honk, the impatient rev, before pressure washers cut in sharply. These sounds aren't random. They map labour, consumption, and maintenance as clearly as any timetable.

Walk through any new development in India and the billboards promise the same thing: Modern Lifestyles, Premium Spaces, International Standards. But close your eyes, and the lifestyle reveals a different texture. It is the high-velocity spray of a neighbour's power wash, the clean electronic ding of induction cooktops, and the monolithic drone of VRF cooling: the sound of domestic spaces syncing with the rhythm of the global factory.

I am asking a simple question: why does contemporary progress in India sound the way it does? The piece explores what I think of as an acoustic transition, from the negotiative, human noise of the street to the engineered hum of private interiors, and argues that sound is not incidental but a by-product of how development is currently optimised. Rather than treating noise as cultural excess or regulatory failure, I am looking at it as evidence of design priorities. In this sense, sound becomes a trace of prescriptive knowledge at work: what has been learned about how to keep systems running cheaply and continuously under real constraints of density, climate, cost, and scale. Quiet systems require redundancy, insulation, surplus capacity, and time. Loud systems prioritise continuity, precision, and speed. In India's current phase of growth, sound is often the cost of making things work reliably at scale.
I am moving through three linked ideas:

The Sound of Precision
Domestic machines like pressure washers and induction stoves signal a shift from improvisation to specification, from "making do" to maintaining spaces exactly as designed. Their noise reflects effort and exactness, not disorder.

The Acoustic Fortress
New homes are sold as sanctuaries, yet rely on systems that push sound outward into shared space. Silence emerges not as a baseline condition but as an infrastructural achievement, unevenly distributed and carefully engineered.

Density as a Design Condition
Drawing on media and communication theory, I also consider whether India's tolerance for dense, high-decibel physical environments is mirrored in its digital interfaces and information flows, which often favour layering and volume over restraint.
Ruminating · Akshaya R
NCERT = Deemed University?
At first glance, NCERT being granted Deemed-to-be-University status looks like a routine bureaucratic upgrade. But structurally, this is a significant shift, one that blurs long-standing boundaries between curriculum authority, teacher training, research, and degree-granting power. The implications are deep, and they raise uncomfortable but necessary questions.

NCERT as a Degree-Awarding Competitor

As a Deemed University, NCERT will gain autonomy over curriculum design, admissions, and fee structures. This opens the door for NCERT to launch its own UG, PG, and PhD programs, particularly in education, curriculum studies, assessment, and pedagogy. If these programs emerge, they could quickly become the gold standard for teacher recruitment in both public and private schools. Degrees like a B.Ed or M.Ed from state or private universities may begin to look secondary, not because they lack rigour, but because NCERT's institutional authority will now carry unmatched symbolic weight.

The central tension here is conflict of interest. If NCERT becomes an active competitor in the degree market, how does it continue to function as a neutral national curriculum advisor? A body that both sets standards and sells credentials aligned to those standards risks privileging its own academic models, subtly or otherwise. It's possible this sets up a de facto hierarchy, where NCERT degrees come to be seen as the default benchmark, influencing hiring practices across the sector.

Research University Status and the NEP Alignment Question

Deemed status will allow NCERT to function as a full-fledged research university, aligning neatly with NEP 2020's emphasis on integrated teacher education and research-led pedagogy. One likely development I see is the expansion of Integrated Teacher Education Programs, where students spend four or five formative years entirely within NCERT-designed frameworks.
This could produce teachers deeply aligned with national curricular goals from day one: efficient, consistent, and policy-compliant. But this efficiency comes at a cost. If the same institution writes the textbooks, trains the teachers, and conducts the research validating those textbooks, the space for independent academic critique narrows. Educational research thrives on contestation, not consensus. Over-centralization risks turning pedagogy into policy compliance rather than an evolving intellectual field. Standardisation certainly improves alignment, but it also risks flattening critical diversity into compliance.

Global Reach and Soft Power Expansion

NCERT already influences curricula beyond India, especially in developing countries that adopt NCERT-style frameworks. University status gives it parity with international higher education institutions, enabling formal collaborations, joint degrees, and credit-based exchanges. We may see NCERT offering online certifications, micro-credentials, or modular courses for international educators interested in the "Indian model" of schooling. This positions NCERT as a soft-power educational exporter, especially in the Global South. It could elevate India's educational diplomacy, but it could also commodify pedagogy in ways that prioritize exportability over contextual nuance, with NCERT becoming an international brand and not just a national institution.

Fees, Funding, and the Quiet Shift Toward Monetization

While NCERT is a government-funded body, Deemed University status often comes with expectations of financial self-sufficiency. Over time, this may mean reduced reliance on government grants.
Higher Education · Akshaya R
I see how you think...
I spend a lot of time looking at other people’s thinking. Not in a lofty way, but in a very ordinary way: reading what someone has written at the end of a long day, trying to understand what they meant, not just what they managed to say. Grading, over time, becomes less about correctness and more about interpretation. You begin to recognise effort, confusion, confidence, and uncertainty even when they’re imperfectly expressed.

AI entered this space quietly. It didn’t arrive with a big announcement. It just started showing up alongside the work, offering suggestions, scores, drafts of feedback. At first, it felt helpful. A second set of eyes. A way to move faster through volume. And in many ways, it is helpful. But sitting with it, submission after submission, something else starts to surface.

One evening, I remember reading an answer that was technically messy. The structure was off. The language was uneven. But the reasoning was there. You could see the learner circling the right idea, almost touching it, then pulling back. It wasn’t elegant, but it was honest thinking. The AI score was lower than I expected. Not wildly wrong, just… unkind in a quiet way. I reread the answer. I slowed down. I adjusted the score.

That moment stays with me because it reminded me that understanding often shows up before fluency. And fluency is easier to reward than understanding. I’ve noticed this pattern repeatedly. Answers that are polished, well-organised, and confident tend to move smoothly through the system. They look like what a “good answer” is supposed to look like. Meanwhile, shorter, rougher responses sometimes get flattened, even when the idea underneath is sound.
Work · Akshaya R
When Authority Can Be Simulated, Alerts Fail
The playbook is remarkably consistent. You receive a call from someone claiming to be law enforcement, informing you that your identity is linked to a serious crime. Within minutes, you are transferred, sometimes onto a video call, to other "officials" who speak in dense legal language, reference case numbers, and display the symbols you associate with the justice system. Nothing feels obviously fake; everything feels procedural. You are instructed not to leave your home, not to speak to anyone else, and not to disconnect the phone or camera while the "investigation" is underway. Written instructions follow. Movements are restricted. Everyday activities are observed. The calls continue for hours, sometimes days, creating a closed loop where fear, authority, and compliance reinforce one another.

Eventually, a resolution is offered: a financial transfer in lieu of arrest or prolonged legal proceedings. Exhausted and isolated, victims comply. Once the transfer is complete, the calls stop. The officials disappear. The money does not return. The scammers have grown wiser about covering their tracks as well: money is now funneled via mule accounts, and the account holders are rewarded with lavish holidays, five-star stays, and first-class tickets.

Nothing was hacked. No systems failed. The scam worked because authority itself was convincingly simulated and judgment collapsed under pressure. This failure is becoming more common, not because people are getting less careful, but because the signals people have historically relied on (official language, urgency, familiarity, institutional tone) are now easy to imitate. Tools that generate fluent text, mimic voices, or replicate bureaucratic phrasing don't need to be perfect to be effective; they only need to sound plausible enough at the moment a decision is made. In practice, this shifts the burden of verification onto individuals in environments they were never trained to navigate. Most existing defenses operate too late.
Technical systems are designed to protect infrastructure, not judgment. Legal frameworks punish wrongdoing after harm has occurred. Awareness campaigns warn people to "be careful" or "don't click links," but don't train them in what to do when a message feels urgent, authoritative, and ambiguous. Alerts are often triggered after trust has already been granted, or they appear so frequently that people learn to ignore them. In these moments, the failure isn't a lack of information; it's a lack of practiced judgment.

I think this points to a missing category. We treat fraud as a technical or compliance problem, but it is equally a human decision problem. The critical vulnerability is the moment where someone must decide whether to comply, verify, delay, or refuse. That moment is shaped by habit, intuition, and past experience, not by policy documents or warning banners.

The claim I want to test is simple but non-obvious: judgment under simulated authority may be trainable, but only through experience rather than instruction. Just as pilots train for emergencies through simulation rather than by reading manuals, people may need structured, safe exposure to realistic scam scenarios in order to recognize them under pressure. The goal isn't paranoia or blanket distrust, but calibration: knowing when to pause, when to verify, and when something that sounds official isn't.

This matters because the cost of getting it wrong is rising, and the margin for error is shrinking. As deception becomes cheaper and more convincing, relying on alerts and after-the-fact remedies will continue to fail at the exact point where trust is decided. If there is a way to strengthen judgment before harm occurs, it would represent a new layer of defense, one that complements technical and legal systems rather than replacing them.
I think this is an idea worth investigating further, because there is an undersupply of people and projects willing to sit with uncertainty at the intersection of technology, institutions, and human judgment. In particular, I think we underestimate three things:

First, many important questions, especially around trust, fraud, and decision-making, are genuinely hard to answer. Evidence is often mixed, context-dependent, and messy, and quick conclusions are usually overconfident.

Second, experts and institutions are often wrong in practice, even when they are right in theory. We have good evidence of this. At the same time, dismissing expertise entirely is a serious overcorrection that usually leaves people worse off, not better.

Third, despite these difficulties, it is still possible to move closer to the truth, but only if we're willing to slow down, test ideas in the real world, and accept outcomes that are ambiguous or uncomfortable.

My project is an attempt to do exactly that. Rather than assuming we know how people should behave in the face of technology-enabled deception, I want to find out what actually helps them make better decisions under pressure, and where those interventions fail. Even negative results would be valuable, because they would clarify the limits of education and judgment in a space where policy and technology are currently guessing.

This isn't a proposal for a product or a solution. It's a statement of the problem I'm trying to understand: when authority can be convincingly simulated, alerts fail, and judgment becomes the last line of defense. Whether that judgment can be reliably trained is an open question, but it's one I think is worth testing carefully. What I am currently working on: I have designed a 20–35 minute training unit focused on digital arrest scams.
The outline is rather basic for now: I walk participants through what the scam is, how scammers make contact with a potential victim, the playbook they employ, any key words I have noticed, and so on. For the key words and other details on how they reach out to victims, I reached out through a connection to an official at the Pune Cyber Cell to see if they are willing to talk to me about this. Any information they are willing to share will go into my sessions and into improving the session playbook. I have also cold-contacted 5–6 journalists who have investigated this topic to ask what my sessions should cover and whether there is a public interest angle I am missing. No responses yet; I expect some this week once the holiday season settles down.
Public Research · Akshaya R
Woke up too late
When a family member is terminally ill, you cannot see past that. Once my father passed away and I had to slowly move on, I noticed others had already moved on, and moved up, ages ago. Roles, promotions, salaries: all seem fixed in retrospect. I just had to watch people rise while I stayed where I was.

When Scale Optimizes Everything Except the People

On paper, my role is Academic Delivery. In reality, my work spans grading oversight, learner follow-ups, course management, SLA enforcement, escalation handling, and cost analysis, including assessing whether grading itself can be reduced or eliminated through structural and AI-led shifts. That gap between title and reality is not accidental. It is a feature of the system.

Over the last few quarters, academic grading volume has scaled dramatically. We are now grading well above 200,000 learner submissions and trending toward nearly a million annually. The growth is global, consistent, and predictable. From a business standpoint, this is success. Programs are scaling. Demand exists. Revenue holds.

But scale doesn't simply increase volume. It reshapes work. What begins as academic evaluation slowly becomes operational throughput. Pedagogical judgment is replaced with turnaround time. Learning quality is translated into metrics. And once that translation happens, optimization follows.

The Efficiency Story (and Why It's Convincing)

To its credit, the system responded rationally. Grading costs were reduced through a series of structural shifts:

Centralizing grading away from course leaders

Moving from full-time graders to part-time consultants
Career Growth · Akshaya R