Iterations

Documenting ideas, reflections, and research-in-progress.
NCERT = Deemed University?
At first glance, NCERT being granted Deemed-to-be University status looks like a routine bureaucratic upgrade. But structurally, this is a significant shift, one that blurs long-standing boundaries between curriculum authority, teacher training, research, and degree-granting power. The implications are deep, and they raise uncomfortable but necessary questions.

NCERT as a Degree-Awarding Competitor
As a Deemed University, NCERT will gain autonomy over curriculum design, admissions, and fee structures. This opens the door for NCERT to launch its own UG, PG, and PhD programs, particularly in education, curriculum studies, assessment, and pedagogy. If these programs emerge, they could quickly become the gold standard for teacher recruitment in both public and private schools. Degrees like B.Ed or M.Ed from state or private universities may begin to look secondary, not because they lack rigour, but because NCERT's institutional authority will now carry unmatched symbolic weight.

The central tension here is conflict of interest. If NCERT becomes an active competitor in the degree market, how does it continue to function as a neutral national curriculum advisor? A body that both sets standards and sells credentials aligned to those standards risks privileging its own academic models, subtly or otherwise. It's possible this sets up a de facto hierarchy, where NCERT degrees come to be seen as the default benchmark, influencing hiring practices across the sector.

Research University Status and the NEP Alignment Question
Deemed status will allow NCERT to function as a full-fledged research university, aligning neatly with NEP 2020's emphasis on integrated teacher education and research-led pedagogy. One likely development I see is the expansion of Integrated Teacher Education Programs, where students spend four or five formative years entirely within NCERT-designed frameworks.
This could produce teachers deeply aligned with national curricular goals from day one: efficient, consistent, and policy-compliant. But this efficiency comes at a cost. If the same institution writes the textbooks, trains the teachers, and conducts the research validating those textbooks, the space for independent academic critique narrows. Educational research thrives on contestation, not consensus. Over-centralization risks turning pedagogy into policy compliance rather than an evolving intellectual field. Standardisation certainly improves alignment, but it also risks flattening critical diversity into compliance.

Global Reach and Soft Power Expansion
NCERT already influences curricula beyond India, especially in developing countries that adopt NCERT-style frameworks. University status gives it parity with international higher education institutions, enabling formal collaborations, joint degrees, and credit-based exchanges. We may see NCERT offering online certifications, micro-credentials, or modular courses for international educators interested in the "Indian model" of schooling. This positions NCERT as a soft-power educational exporter, especially in the Global South. This could elevate India's educational diplomacy, but it could also commodify pedagogy in ways that prioritize exportability over contextual nuance, with NCERT becoming an international brand and not just a national institution.

Fees, Funding, and the Quiet Shift Toward Monetization
While NCERT is a government-funded body, Deemed University status often comes with expectations of financial self-sufficiency. Over time, this may mean reduced reliance on government grants.
Higher Education · Akshaya R
I see how you think...
I spend a lot of time looking at other people's thinking. Not in a lofty way, but a very ordinary way. Reading what someone has written at the end of a long day. Trying to understand what they meant, not just what they managed to say. Grading, over time, becomes less about correctness and more about interpretation. You begin to recognise effort, confusion, confidence, and uncertainty even when they're imperfectly expressed.

AI entered this space quietly. It didn't arrive with a big announcement. It just started showing up alongside the work, offering suggestions, scores, drafts of feedback. At first, it felt helpful. A second set of eyes. A way to move faster through volume. And in many ways, it is helpful. But sitting with it, submission after submission, something else starts to surface.

One evening, I remember reading an answer that was technically messy. The structure was off. The language was uneven. But the reasoning was there. You could see the learner circling the right idea, almost touching it, then pulling back. It wasn't elegant, but it was honest thinking. The AI score was lower than I expected. Not wildly wrong, just… unkind in a quiet way. I reread the answer. I slowed down. I adjusted the score.

That moment stays with me because it reminded me that understanding often shows up before fluency. And fluency is easier to reward than understanding. I've noticed this pattern repeatedly. Answers that are polished, well-organised, and confident tend to move smoothly through the system. They look like what a "good answer" is supposed to look like. Meanwhile, shorter, rougher responses sometimes get flattened, even when the idea underneath is sound.
Work · Akshaya R
When Authority Can Be Simulated, Alerts Fail
The playbook is remarkably consistent. You receive a call from someone claiming to be law enforcement, informing you that your identity is linked to a serious crime. Within minutes, you are transferred, sometimes to a video call, to other "officials" who speak in dense legal language, reference case numbers, and display the symbols you associate with the justice system. Nothing feels obviously fake; everything feels procedural. Instructions are given: do not leave your home, do not speak to anyone else, and do not disconnect the phone or camera while the "investigation" is underway. Written instructions follow. Movements are restricted. Everyday activities are observed. The calls continue for hours, sometimes days, creating a closed loop where fear, authority, and compliance reinforce one another.

Eventually, a resolution is offered: a financial transfer in lieu of arrest or prolonged legal proceedings. Exhausted and isolated, the victim complies. Once the transfer is complete, the calls stop. The officials disappear. The money does not return. The scammers have gotten wiser to the tricks as well. Money is now funneled via mule accounts, and the account holders are rewarded with lavish holidays, five-star stays, and first-class tickets.

Nothing was hacked. No systems failed. The scam worked because authority itself was convincingly simulated and judgment collapsed under pressure. This failure is becoming more common, not because people are getting less careful, but because the signals people have historically relied on (official language, urgency, familiarity, institutional tone) are now easy to imitate. Tools that generate fluent text, mimic voices, or replicate bureaucratic phrasing don't need to be perfect to be effective; they only need to sound plausible enough at the moment a decision is made. In practice, this shifts the burden of verification onto individuals in environments they were never trained to navigate. Most existing defenses operate too late.
Technical systems are designed to protect infrastructure, not judgment. Legal frameworks punish wrongdoing after harm has occurred. Awareness campaigns warn people to "be careful" or "don't click links," but don't train what to do when a message feels urgent, authoritative, and ambiguous. Alerts are often triggered after trust has already been granted, or they appear so frequently that people learn to ignore them. In these moments, the failure isn't a lack of information; it's a lack of practiced judgment.

I think this points to a missing category. We treat fraud as a technical or compliance problem, but it is equally a human decision problem. The critical vulnerability is the moment where someone must decide whether to comply, verify, delay, or refuse. That moment is shaped by habit, intuition, and past experience, not by policy documents or warning banners.

The claim I want to test is simple but non-obvious: judgment under simulated authority may be trainable, but only through experience rather than instruction. Just as pilots train for emergencies through simulation rather than reading manuals, people may need structured, safe exposure to realistic scam scenarios in order to recognize them under pressure. The goal isn't paranoia or blanket distrust, but calibration: knowing when to pause, when to verify, and when something that sounds official isn't.

This matters because the cost of getting it wrong is rising, and the margin for error is shrinking. As deception becomes cheaper and more convincing, relying on alerts and after-the-fact remedies will continue to fail at the exact point where trust is decided. If there is a way to strengthen judgment before harm occurs, it would represent a new layer of defense, one that complements technical and legal systems rather than replacing them.
I think this idea is worth investigating further, because there is an undersupply of people and projects willing to sit with uncertainty at the intersection of technology, institutions, and human judgment. In particular, I think we underestimate three things.

First, many important questions, especially around trust, fraud, and decision-making, are genuinely hard to answer. Evidence is often mixed, context-dependent, and messy, and quick conclusions are usually overconfident.

Second, experts and institutions are often wrong in practice, even when they are right in theory. We have good evidence of this. At the same time, dismissing expertise entirely is a serious overcorrection that usually leaves people worse off, not better.

Third, despite these difficulties, it is still possible to move closer to the truth, but only if we're willing to slow down, test ideas in the real world, and accept outcomes that are ambiguous or uncomfortable.

My project is an attempt to do exactly that. Rather than assuming we know how people should behave in the face of technology-enabled deception, I want to find out what actually helps them make better decisions under pressure, and where those interventions fail. Even negative results would be valuable, because they would clarify the limits of education and judgment in a space where policy and technology are currently guessing.

This isn't a proposal for a product or a solution. It's a statement of the problem I'm trying to understand: when authority can be convincingly simulated, alerts fail, and judgment becomes the last line of defense. Whether that judgment can be reliably trained is an open question, but it's one I think is worth testing carefully.

What I am currently working on: I have designed a 20–35 minute training unit focused on digital arrest scams.
The outline is rather basic for now: I walk participants through what the scam is, how scammers make contact with a potential victim, the playbook they employ, and any key phrases I have noticed. For the key phrases and other details on how scammers reach out to victims, I reached out via a connection to an official at the Pune Cyber Cell to see if they are willing to talk to me about this. Any information they are willing to share will go into my sessions and into improving the session playbook. I have also cold-contacted 5–6 journalists who have investigated this topic to ask what my sessions should cover and whether there is a public interest angle I am missing. No responses as of yet; I expect some this week once the holiday season settles down.
Public Research · Akshaya R
Woke up too late
When a family member is terminally ill, you cannot see past that. Once my father passed away and I had to slowly move on, I noticed others had already moved on and moved up ages ago. Roles, promotions, salaries: all seem fixed in retrospect. I had to watch people rise while I stayed where I was.

When Scale Optimizes Everything Except the People
On paper, my role is Academic Delivery. In reality, my work spans grading oversight, learner follow-ups, course management, SLA enforcement, escalation handling, and cost analysis, including assessing whether grading itself can be reduced or eliminated through structural and AI-led shifts. That gap between title and reality is not accidental. It is a feature of the system.

Over the last few quarters, academic grading volume has scaled dramatically. We are now grading well above 200,000 learner submissions and are trending toward nearly a million annually. The growth is global, consistent, and predictable. From a business standpoint, this is success. Programs are scaling. Demand exists. Revenue holds.

But scale doesn't simply increase volume. It reshapes work. What begins as academic evaluation slowly becomes operational throughput. Pedagogical judgment is replaced with turnaround time. Learning quality is translated into metrics. And once that translation happens, optimization follows.

The Efficiency Story (and Why It's Convincing)
To its credit, the system responded rationally. Grading costs were reduced through a series of structural shifts: centralizing grading away from course leaders, and moving from full-time graders to part-time consultants.
Career Growth · Akshaya R