AIML-500 · Indiana Wesleyan University

Manikanta Vasana

AI/ML Graduate Student

Machine Learning Fundamentals · Instructor: Marvin Hunt

Building expertise at the intersection of artificial intelligence, data-driven decision making, and human-centered design. Passionate about AI that is not just technically sound but genuinely useful.

view_portfolio() · github_profile()

04 artifacts_live · 2026 current_year · IWU institution · AI/ML focus_area

About Me

I am a Software Engineer and graduate student in Machine Learning Fundamentals at Indiana Wesleyan University, developing skills across the technical and leadership dimensions of AI/ML work. My focus is on understanding not just how AI systems are built, but how they are adopted, led, and sustained in real organizational contexts.

In my professional role, I work on backend systems and enterprise-scale microservice architecture. My current project involves migrating legacy COBOL systems to Java Spring Boot microservices, handling high-volume data pipelines, cloud storage integration, and container orchestration on Kubernetes. This hands-on engineering experience gives me a practical lens that I bring directly into my AI/ML studies.

I bring a cross-functional mindset to AI work, bridging the gap between technical development and the people and processes that determine whether AI actually delivers value. I believe that responsible, human-centered AI is not a constraint on innovation; it is a condition for it.

Machine Learning · Fundamentals & Concepts
Generative AI · Claude, ChatGPT, Gemini
Prompt Engineering · LLM Interaction Design
Design Thinking · Human-Centered Approach
Java Spring Boot · Microservice Architecture
Backend Systems · APIs · Cloud · Kubernetes
Change Leadership · AI/ML Integration
Technical Writing · Mixed Audiences
Bot Building · Chatbase · Mizou
Research & Synthesis · APA · Critical Analysis
// personal_value_proposition

I bridge the gap between machine learning theory and real organizational practice — translating complex AI concepts into decisions that non-technical stakeholders can act on, while maintaining the technical depth that engineering teams respect. As a Software Engineer with hands-on experience in backend systems, microservice architecture, and enterprise-scale data pipelines, combined with graduate-level AI/ML studies, I bring both the engineering credibility and the communication skills that most ML practitioners develop separately, if at all. My goal is to help organizations not just adopt AI, but adopt it responsibly, sustainably, and in ways that actually deliver value to the people it is meant to serve.

Portfolio Artifacts

A curated collection of work from AIML-500, demonstrating research ability, technical skill, and human-centered thinking.

🎯
// target_audience

This portfolio is designed for AI/ML hiring managers, technical recruiters, and team leads evaluating candidates who can operate at the intersection of technical depth and organizational leadership. It is equally relevant to academic collaborators and program faculty assessing applied learning outcomes. Each artifact is framed not just as a course deliverable but as evidence of practical thinking — showing how I approach ambiguous problems, communicate across audiences, and build AI solutions with responsibility and rigor. If you are looking for someone who understands both the algorithm and the impact it has on real people and organizations, this portfolio is for you.

artifact_01
● live

The Evolution of Artificial Intelligence

A Research-Based Historical Timeline · 1950s to Present

This artifact traces the full arc of Artificial Intelligence development from Alan Turing's foundational 1950 paper and the Dartmouth Conference of 1956, through two AI Winters caused by hardware limitations and funding collapses, to the deep learning revolution of 2012 and the emergence of large language models and generative AI in the 2020s.

AI/ML History · Research & Synthesis · Technical Writing · Deep Learning · Generative AI · APA Citation
Objective

Trace the historical development of AI, identify key milestones, and analyze the technical, financial, and organizational forces that have shaped the field from 1950 to the present generative AI era.

Process

Original research and synthesis across multiple credible academic and industry sources. Independent analysis structured as a chronological timeline with contextual commentary at each milestone.

Tools & Technologies

Research: Academic databases, IWU library
Writing: Microsoft Word, APA 7th ed.
AI Support: Claude (Anthropic)

Unique Value

Goes beyond listing events to explain why the field stalled and accelerated, connecting historical patterns to practical decisions AI practitioners face today.

1950
Turing's Question
Alan Turing publishes "Computing Machinery and Intelligence," proposing the Turing Test and asking whether machines can think.
1956
AI is Born · The Dartmouth Conference
John McCarthy coins the term "Artificial Intelligence." Early optimism runs high; researchers predict human-level AI within a decade.
1970s
First AI Winter
Early systems prove too brittle for real-world use. The UK's Lighthill Report triggers funding cuts, and US agencies pull back as well. Progress stalls.
1980s
Expert Systems & Second Winter
Rule-based expert systems briefly revive interest, but prove expensive and inflexible. A second AI winter follows by the late 1980s.
2012
The AlexNet Moment
AlexNet, built in Geoffrey Hinton's lab, wins ImageNet by a record margin. Nvidia GPU hardware finally enables deep learning theories that had existed for decades.
2017
Attention Is All You Need
Google researchers publish the Transformer architecture, the foundation of GPT, BERT, and all modern large language models.
2022–Present
The Generative AI Era
ChatGPT, Claude, Gemini, and Llama represent a new category of AI: generative systems capable of producing novel text, code, and images at scale.
Why This Matters to Employers

Understanding AI history is directly relevant to anyone making decisions about AI adoption, investment, or strategy. Leaders who understand why the field stalled before are better positioned to avoid repeating those conditions. This artifact demonstrates research capability, analytical thinking, and the ability to communicate complex technical history to both technical and non-technical audiences.

// references
  • Copeland, B. J. (2023). Artificial intelligence. Encyclopaedia Britannica. https://www.britannica.com/technology/artificial-intelligence
  • Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. Basic Books.
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
  • McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. Dartmouth College.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://arxiv.org/abs/1706.03762
artifact_02
◆ new

WellGuide: Sleep Insights

AI Wellness Chatbot · Design Thinking · Mizou.com (Free Tier)

WellGuide: Sleep Insights is an AI-powered wellness chatbot focused specifically on sleep health, designed and deployed on Mizou.com, a purpose-built platform for creating responsible, educational AI chatbots. It was built from scratch using a structured design thinking process, from empathy mapping through to a working prototype tested with real user interactions, and it demonstrates how generative AI can be shaped into a purposeful, human-centered tool with a clear, responsible scope.

bot deployed · mizou.com · free tier · available now · try_bot() →
Bot Building · Design Thinking · Responsible AI · Sleep Health · Prompt Engineering · Mizou.com · Generative AI · UX Design
Objective

Design and deploy a responsible AI chatbot that provides accessible, supportive sleep wellness guidance, demonstrating practical application of design thinking and responsible AI principles in a real deployment environment.

Target Audience

Graduate students and early-career professionals navigating academic pressure, work-life balance, and poor sleep patterns: users who need accessible support without clinical overhead.

Tools & Technologies

Platform: Mizou.com (AI bot builder)
Method: Stanford d.school Design Thinking
Framework: Prompt engineering & persona design
Docs: Microsoft Word, APA 7th ed.

Relevance

Professionals who can design responsible AI experiences, not just deploy them, will be in high demand. WellGuide demonstrates exactly that combination of capability and judgment.

🌙
Sleep Check-Ins
Guided prompts helping users reflect on sleep quality, duration, and patterns without clinical framing.
🛡️
Responsible AI Guardrails
Scoped to avoid medical diagnosis. Redirects to professional resources when conversations approach clinical territory.
💬
Conversational Persona
Warm, calm persona designed to feel supportive while maintaining appropriate professional boundaries.
🎯
Sleep Hygiene Tips
Context-aware suggestions for wind-down routines and evidence-based sleep hygiene practices.
// design_thinking_process
1
Empathize · Identified the target user group and mapped their pain points, unmet needs, and current coping behaviors around sleep.
2
Define · Synthesized empathy findings into a clear problem statement: users need accessible, judgment-free sleep support that fits into a busy schedule.
3
Ideate · Explored multiple chatbot concepts before converging on a guided check-in model with sleep hygiene recommendations and clear scope boundaries.
4
Prototype · Built the bot on Mizou.com, crafting the persona, conversation flow, and prompt engineering from scratch with explicit responsible AI boundaries.
5
Test & Reflect · Ran the bot through sample user scenarios, identified gaps, refined persona tone, and documented responsible AI design lessons learned.
Value Proposition

WellGuide demonstrates that building AI tools responsibly requires more than technical setup; it requires deliberate design thinking, ethical scoping, and user empathy. This artifact shows I can do all three. For employers integrating AI into customer-facing or employee-facing contexts, that combination of technical capability and human-centered judgment is precisely what separates thoughtful AI practitioners from those who simply deploy tools without considering their impact.

// references
  • Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., & Agarwal, S. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901. https://arxiv.org/abs/2005.14165
  • IDEO.org. (2015). The field guide to human-centered design. IDEO.org/Design Kit. https://www.designkit.org/resources/1
  • Johansson-Sköldberg, U., Woodilla, J., & Çetinkaya, M. (2013). Design thinking: Past, present and possible futures. Creativity and Innovation Management, 22(2), 121–146. https://doi.org/10.1111/caim.12023
  • Mizou.com. (2024). Mizou: AI chatbot builder for education. https://mizou.com
artifact_03
◆ new

How an Agent Learns the Best Actions in Reinforcement Learning

Group Research Paper · Reinforcement Learning · Tesla FSD Case Study

A collaborative group research paper exploring how reinforcement learning agents learn optimal behavior through the observe–act–reward loop. It uses two contrasting case studies, Pac-Man and Tesla Full Self-Driving, to bridge foundational RL theory with a real-world engineering system operating at scale. The paper covers the credit assignment problem, policy formation, value-based learning, policy gradient methods, actor-critic architecture, and the exploration vs. exploitation tradeoff.

Reinforcement Learning · Deep Q-Network (DQN) · Policy Gradient · Actor-Critic · Tesla FSD · Autonomous Driving · Group Research · APA 7th ed.
Objective

Explain how RL agents learn optimal actions through trial, feedback, and iteration, connecting core algorithm theory to a high-stakes real-world deployment: Tesla's Full Self-Driving system.

Collaborators

Manikanta Vasana · Khaleel Abdul · Qasim Ali · Po-Chun Huang · Arvinnd Chowdary Kuchipudi · Won Woo Derek Sohn · Ashok Bollepalli · Iftekhar Ahmed Tarkati Mohammed

Case Studies

Pac-Man (DeepMind DQN): Learned to play from raw pixels with zero prior knowledge.
Tesla FSD: Eight-camera state space, continuous action space, real-world policy deployment.

Key References

Sutton & Barto (2018): RL: An Introduction
Mnih et al. (2015): DQN, Nature 518
Kiran et al. (2022): Deep RL for Autonomous Driving, IEEE TITS

// core_concepts_covered
1
The Basic Loop · State observation → action selection → reward receipt → state transition. The same loop runs in Pac-Man and at a city intersection at rush hour.
2
Credit Assignment Problem · Tracing which earlier actions caused a good outcome later. Tesla FSD must connect a speed reduction six seconds prior to a safe left turn across oncoming traffic.
3
Policy Formation · The agent's learned mapping from observed states to chosen actions. No rules are programmed; they emerge from millions of training iterations.
4
Value-Based Learning & Policy Gradient · DQN assigns value to state-action pairs; policy gradient methods directly adjust policy parameters. Tesla uses fleet-scale real-world data to refine its policy network.
5
Actor-Critic Architecture · The actor decides the action; the critic scores it. Tesla's simulation infrastructure (billions of virtual miles) is built around this hybrid to learn edge cases without real-world risk.
6
Exploration vs. Exploitation · Epsilon-greedy balances exploiting known-good actions with occasional random exploration. As training matures, randomness decreases and the agent settles into validated behavior.
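The loop, the temporal-difference update, and the epsilon-greedy tradeoff described above can be sketched as a minimal tabular Q-learning agent. The one-dimensional corridor environment, learning rate, and decay schedule are illustrative choices, not part of the paper; the point is that the same observe–act–reward structure appears at any scale.

```python
import random
from collections import defaultdict

random.seed(0)  # reproducible toy run

# Toy environment: a 5-state corridor. The agent starts at 0 and is
# rewarded for reaching position 4. A stand-in for Pac-Man or FSD.
ACTIONS = [-1, +1]            # step left or step right
GOAL, ALPHA, GAMMA = 4, 0.5, 0.9

q = defaultdict(float)        # Q(state, action) table, default 0.0

def choose_action(state: int, epsilon: float) -> int:
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

for episode in range(300):
    state = 0
    epsilon = max(0.05, 1.0 - episode / 100)   # decay exploration as training matures
    while state != GOAL:
        action = choose_action(state, epsilon)
        next_state = max(0, min(GOAL, state + action))
        reward = 1.0 if next_state == GOAL else -0.01   # small step cost, goal bonus
        # Temporal-difference update: credit assignment in one line.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy: step right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
```

No rule "move right" is ever programmed; it emerges from the reward signal, which is exactly the policy-formation point the paper makes about both case studies.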
Why This Matters to Employers

Reinforcement learning is behind some of the most commercially valuable AI systems in production today: autonomous vehicles, robotics, recommendation engines, and game-playing agents. This paper demonstrates the ability to explain complex RL mechanics clearly, connect theory to real engineering systems, and collaborate in a group research context. These skills are directly relevant to roles in ML engineering, AI research, and applied AI product development.

// references
  • Kiran, B. R., Sobh, I., Talpaert, V., Mannion, P., Al Sallab, A. A., Yogamani, S., & Pérez, P. (2022). Deep reinforcement learning for autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems, 23(6), 4909–4926. https://doi.org/10.1109/TITS.2021.3054625
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533. https://doi.org/10.1038/nature14236
  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). MIT Press. https://incompleteideas.net/book/the-book-2nd.html
artifact_04
● live

Applied ML Decision-Making Under Constraints

Data Challenge Scenario Coach · Three-Scenario Interactive Session · SchoolAI

An interactive, coached AI session working through three progressively harder real-world ML data challenges: missing data imputation in energy forecasting, data drift in a live fraud detection system, and privacy-constrained feature selection for a high-dimensional recommendation engine. Each scenario required structured reasoning, trade-off analysis, and validation design, with real-time feedback from an AI coach evaluating the depth and quality of every decision.

Data Quality · Missing Data · Data Drift · Feature Selection · Privacy-Preserving ML · Fraud Detection · Dimensionality Reduction · Critical Thinking
Objective

Demonstrate applied ML judgment, not just conceptual knowledge, by working through realistic, constraint-laden data challenges and defending decisions with clear reasoning and trade-off awareness.

Format

Platform: SchoolAI Scenario Coach
Structure: Three escalating scenarios
Evaluation: Real-time AI feedback + instructor review of recorded transcript

Skills Demonstrated

Diagnostic thinking: MAR vs MNAR analysis
Systems reasoning: Drift detection + retraining logic
Constraint navigation: Privacy, latency, quality trade-offs

Tools & Technologies

Platform: SchoolAI (AIML-500 Coach)
Concepts: PSI, KL divergence, MICE, differential privacy, embedding compression
Course: AIML-500-01N1G

// scenario_progression
1
Scenario 1: Missing Data (Energy Forecasting) A 50,000-row dataset predicting monthly energy consumption with missingness ranging from 0.5% to 30% across features, skewed toward older records. Required diagnosing the missingness mechanism (MAR vs MNAR), tiering imputation strategies by severity, designing missingness indicators as model features, and validating imputation quality through simulated holdout experiments.
2
Scenario 2: Data Drift (Real-Time Fraud Detection) A production fraud scoring system showing precision drops and false positive spikes for new merchant categories and promotional events, with a 48 to 72 hour label delay. Required distinguishing covariate drift from concept drift, selecting a single precision-based retrain trigger metric, designing a label-completeness cutoff to prevent label-delay bias, and building a short-term mitigation strategy using shadow ensembles and rule-based fallbacks.
3
Scenario 3: Privacy-Constrained Feature Selection (Recommendation Engine) A high-dimensional recommendation system with thousands of sparse categorical features, inferred sensitive attributes, privacy regulations restricting third-party signals, on-device latency requirements, and real-time trend adaptation needs. Required a full privacy audit, staged feature selection, embedding compression, adversarial debiasing, a two-tier on-device and server-side architecture, and population-aggregate trend adaptation that avoids individual data exposure.
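One of the monitoring signals named in Scenario 2, the Population Stability Index (PSI), can be sketched directly: it compares a feature's training-time distribution against its live distribution, bin by bin. The bin counts below and the 0.25 "significant drift" threshold are common industry conventions used for illustration, not values from the coaching session.

```python
import math

def psi(expected_counts: list[int], actual_counts: list[int]) -> float:
    """Population Stability Index over pre-defined bins.

    Higher values mean more distribution shift between the reference
    (training) sample and the live sample. Rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        # Smooth empty bins so the log term stays defined.
        e_pct = max(e / e_total, 1e-6)
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

# Identical bin proportions -> PSI ~ 0 (no drift detected).
assert psi([25, 25, 25, 25], [250, 250, 250, 250]) < 1e-9

# Mass shifting into the top bin crosses the common 0.25 alert bar.
assert psi([25, 25, 25, 25], [10, 15, 25, 50]) > 0.25
```

A metric like this detects covariate drift only; distinguishing it from concept drift, as Scenario 2 required, still needs delayed labels and a label-completeness cutoff.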
Why This Matters to Employers

Real ML work is rarely about picking the right algorithm from a list. It is about making defensible decisions when data is messy, systems are live, constraints are real, and trade-offs are unavoidable. This artifact demonstrates exactly that kind of thinking: the ability to reason through ambiguous, multi-constraint problems out loud, defend choices under probing follow-up questions, and show genuine understanding rather than pattern-matched answers. For employers building or maintaining production ML systems, that kind of applied critical thinking is directly what they need on their teams.

// references
  • Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–407. https://doi.org/10.1561/0400000042
  • Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M., & Bouchachia, A. (2014). A survey on concept drift adaptation. ACM Computing Surveys, 46(4), 1–37. https://doi.org/10.1145/2523813
  • van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). CRC Press. https://stefvanbuuren.name/fimd/
  • Vabalas, A., Gowen, E., Poliakoff, E., & Casson, A. J. (2019). Machine learning algorithm validation with a limited sample size. PLOS ONE, 14(11), e0224365. https://doi.org/10.1371/journal.pone.0224365

Artifact Reflections

Critical reflection on each artifact what I built, why it matters, and what I learned.

// artifact_01 The Evolution of Artificial Intelligence

For my first portfolio artifact, I selected the AI and ML Timeline created during Workshop 1 of AIML-500. My intended audience is future employers and collaborators in the AI/ML field: professionals who want to assess not just what I know technically, but how I think, communicate, and situate myself within the broader field.

"I chose the timeline because it demonstrates something a resume cannot easily show: the ability to take a large, complex body of knowledge, organize it meaningfully, and present it in a way that is accessible to different kinds of readers."

Presenting this work for a professional audience required me to think beyond the academic context in which it was created. I added audience-facing framing to connect the historical analysis to practical, real-world decision-making, shifting the question from "what did I do?" to "why would someone else care about what I did?"

I selected GitHub Pages as my portfolio platform because it signals technical credibility to employers in the AI/ML field. A portfolio hosted on GitHub demonstrates comfort with technical environments, which is itself a relevant signal for the roles I am pursuing.

// artifact_02 WellGuide: Sleep Insights

For my second artifact, I chose the WellGuide: Sleep Insights chatbot because it demonstrates a completely different dimension of AI/ML competency. Where the timeline shows I can research and synthesize, WellGuide shows I can build and design. This artifact speaks more directly to employers evaluating practical, applied AI skills.

"The most important lesson from building WellGuide was that responsible AI design is not just about what the bot can do it is about what you deliberately choose not to let it do."

Customizing this artifact for a professional audience meant surfacing the design thinking process explicitly, because employers in AI integration roles care deeply about methodology, not just outcomes. The five-stage walkthrough makes that process visible in a way a simple project description would not.

Focusing specifically on sleep health rather than general wellness taught me the value of a well-scoped AI tool. A narrower focus made the bot more useful, more trustworthy, and easier to design responsibly. Navigating those scope decisions is now one of my most transferable skills in AI design.

// artifact_03 How an Agent Learns the Best Actions in RL

For my third portfolio artifact, I selected the group research paper on Reinforcement Learning submitted for the 3.3 Assignment in AIML-500. My intended audience is employers and collaborators evaluating applied ML knowledge: specifically, whether I can connect foundational algorithm theory to real-world engineering systems rather than just recite textbook definitions.

"The Pac-Man and Tesla FSD pairing was deliberate one is a toy environment most people recognize, the other is one of the most complex RL deployments in production. Holding both together in the same analysis forces you to see what is truly universal about the algorithm."

Working in a group of eight shaped this paper in important ways. With multiple contributors, the challenge was not producing content; it was ensuring the paper read as a coherent argument rather than a collection of separate sections. My contribution focused on the structural narrative thread: the idea that Pac-Man and Tesla FSD share not just an algorithm but a philosophy of learning through experience rather than instruction.

The credit assignment problem turned out to be the most intellectually rich section to develop. It is easy to describe the basic RL loop, but explaining why connecting a decision made seconds earlier to a later outcome is genuinely hard, and doing so without losing a non-specialist reader, required careful sequencing. That challenge taught me something practical about communicating ML concepts: the difficulty is rarely the math; it is the causality.

// artifact_04 Applied ML Decision-Making Under Constraints

For my fourth portfolio artifact, I selected the Data Challenge Scenario Coach session completed during Workshop 6 of AIML-500. My intended audience is employers and technical hiring teams who want to see not just what I know about machine learning, but how I actually think when the problems get hard and there are no clean textbook answers in sight.

"The most honest test of whether you understand something is whether you can defend your decisions under follow-up questioning and not just state them once and move on."

What made this activity genuinely valuable for a portfolio is that it is not a static deliverable. It is a recorded thinking process. The coach pushed back on every answer, asked for single metrics instead of hedged ranges, and probed whether I could design a concrete experiment rather than describe one abstractly. That kind of pressure is much closer to what a real technical interview or project review looks like than any paper or quiz.

The three scenarios deliberately covered different failure modes in production ML (missing data, distribution shift, and privacy constraints), which meant I could not rely on a single mental framework across all three. Scenario 1 required statistical reasoning about missingness mechanisms. Scenario 2 required systems thinking about live model behavior and delayed feedback loops. Scenario 3 required navigating competing pressures simultaneously: privacy compliance, latency, recommendation quality, and real-time adaptability. Holding all of those constraints in tension and still producing a coherent strategy is a skill that only develops through practice, not memorization.

Adapting this for a professional audience meant being explicit about what the artifact actually demonstrates. A transcript of a coaching session looks informal on the surface. But what it contains is structured reasoning, trade-off acknowledgment, and a willingness to be corrected and to refine, which is exactly the kind of thinking that separates practitioners who understand ML from those who have only studied it.

Connect With Me

Open to conversations about AI/ML, learning opportunities, and collaboration.