AIML-500 · Indiana Wesleyan University
AI/ML Graduate Student
Machine Learning Fundamentals · Instructor: Marvin Hunt
Building expertise at the intersection of artificial intelligence, data-driven decision making, and human-centered design. Passionate about AI that is not just technically sound but genuinely useful.
background
I am a Software Engineer and graduate student in Machine Learning Fundamentals at Indiana Wesleyan University, developing skills across the technical and leadership dimensions of AI/ML work. My focus is on understanding not just how AI systems are built, but how they are adopted, led, and sustained in real organizational contexts.
In my professional role, I work on backend systems and enterprise-scale microservice architecture. My current project involves migrating legacy COBOL systems to Java Spring Boot microservices, handling high-volume data pipelines, cloud storage integration, and container orchestration on Kubernetes. This hands-on engineering experience gives me a practical lens that I bring directly into my AI/ML studies.
I bring a cross-functional mindset to AI work, bridging the gap between technical development and the people and processes that determine whether AI actually delivers value. I believe that responsible, human-centered AI is not a constraint on innovation; it is a condition for it.
I bridge the gap between machine learning theory and real organizational practice — translating complex AI concepts into decisions that non-technical stakeholders can act on, while maintaining the technical depth that engineering teams respect. As a Software Engineer with hands-on experience in backend systems, microservice architecture, and enterprise-scale data pipelines, combined with graduate-level AI/ML studies, I bring both the engineering credibility and the communication skills that most ML practitioners develop separately, if at all. My goal is to help organizations not just adopt AI, but adopt it responsibly, sustainably, and in ways that actually deliver value to the people it is meant to serve.
work samples
A curated collection of work from AIML-500, demonstrating research ability, technical skill, and human-centered thinking.
This portfolio is designed for AI/ML hiring managers, technical recruiters, and team leads evaluating candidates who can operate at the intersection of technical depth and organizational leadership. It is equally relevant to academic collaborators and program faculty assessing applied learning outcomes. Each artifact is framed not just as a course deliverable but as evidence of practical thinking — showing how I approach ambiguous problems, communicate across audiences, and build AI solutions with responsibility and rigor. If you are looking for someone who understands both the algorithm and the impact it has on real people and organizations, this portfolio is for you.
This artifact traces the full arc of Artificial Intelligence development from Alan Turing's foundational 1950 paper and the Dartmouth Conference of 1956, through two AI Winters caused by hardware limitations and funding collapses, to the deep learning revolution of 2012 and the emergence of large language models and generative AI in the 2020s.
Trace the historical development of AI, identify key milestones, and analyze the technical, financial, and organizational forces that have shaped the field from 1950 to the present generative AI era.
Original research and synthesis across multiple credible academic and industry sources. Independent analysis structured as a chronological timeline with contextual commentary at each milestone.
Research: Academic databases, IWU library
Writing: Microsoft Word, APA 7th ed.
AI Support: Claude (Anthropic)
Goes beyond listing events to explain why the field stalled and accelerated, connecting historical patterns to practical decisions AI practitioners face today.
Understanding AI history is directly relevant to anyone making decisions about AI adoption, investment, or strategy. Leaders who understand why the field stalled before are better positioned to avoid repeating those conditions. This artifact demonstrates research capability, analytical thinking, and the ability to communicate complex technical history to both technical and non-technical audiences.
WellGuide: Sleep Insights is an AI-powered wellness chatbot focused specifically on sleep health, designed and deployed on Mizou.com, a purpose-built platform for creating responsible, educational AI chatbots. It was built from scratch using a structured design thinking process, from empathy mapping through to a working prototype tested with real user interactions, and demonstrates how generative AI can be shaped into a purposeful, human-centered tool with a clear, responsible scope.
Design and deploy a responsible AI chatbot providing accessible, supportive sleep wellness guidance, demonstrating practical application of design thinking and responsible AI principles in a real deployment environment.
Graduate students and early-career professionals navigating academic pressure, work-life balance, and poor sleep patterns: users who need accessible support without clinical overhead.
Platform: Mizou.com (AI bot builder)
Method: Stanford d.school Design Thinking
Framework: Prompt engineering & persona design
Docs: Microsoft Word, APA 7th ed.
Professionals who can design responsible AI experiences, not just deploy them, will be in high demand. WellGuide demonstrates exactly that combination of capability and judgment.
WellGuide demonstrates that building AI tools responsibly requires more than technical setup; it requires deliberate design thinking, ethical scoping, and user empathy. This artifact shows I can do all three. For employers integrating AI into customer-facing or employee-facing contexts, that combination of technical capability and human-centered judgment is precisely what separates thoughtful AI practitioners from those who simply deploy tools without considering their impact.
A collaborative group research paper exploring how reinforcement learning agents learn optimal behavior through the observe–act–reward loop. It uses two contrasting case studies, Pac-Man and Tesla Full Self-Driving, to bridge foundational RL theory with a real-world engineering system operating at scale. The paper covers the credit assignment problem, policy formation, value-based learning, policy gradient methods, actor-critic architecture, and the exploration vs. exploitation tradeoff.
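The observe–act–reward loop and value-based learning the paper covers can be sketched with tabular Q-learning. The toy 5-state chain environment, hyperparameters, and epsilon-greedy setup below are illustrative assumptions for this sketch, not details from the paper itself:

```python
import random

random.seed(0)  # reproducibility for this toy run

# Tabular Q-learning on a toy 5-state chain (illustrative assumption): states
# 0..4, action 0 moves left, action 1 moves right, and the only reward is +1
# for reaching the goal at state 4.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: deterministic move, reward only at the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for _ in range(300):
    s, done, steps = 0, False, 0
    while not done and steps < 10_000:
        # Epsilon-greedy: explore occasionally, otherwise act on current values.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Core observe-act-reward update: nudge Q(s, a) toward the
        # bootstrapped target r + GAMMA * max_a' Q(s', a').
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s, steps = s2, steps + 1

# The learned greedy policy moves right from every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
```

The same update rule, scaled up with a neural network approximating Q over raw pixels, is the essence of the DQN approach the paper discusses for Pac-Man.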
Explain how RL agents learn optimal actions through trial, feedback, and iteration, connecting core algorithm theory to a high-stakes real-world deployment: Tesla's Full Self-Driving system.
Manikanta Vasana · Khaleel Abdul · Qasim Ali · Po-Chun Huang · Arvinnd Chowdary Kuchipudi · Won Woo Derek Sohn · Ashok Bollepalli · Iftekhar Ahmed Tarkati Mohammed
Pac-Man (DeepMind DQN): Mastered gameplay from raw pixels with zero prior knowledge.
Tesla FSD: Eight-camera state space, continuous action space, real-world policy deployment.
Sutton & Barto (2018): Reinforcement Learning: An Introduction
Mnih et al. (2015): DQN, Nature 518
Kiran et al. (2022): Deep RL for Autonomous Driving, IEEE TITS
Reinforcement learning is behind some of the most commercially valuable AI systems in production today: autonomous vehicles, robotics, recommendation engines, and game-playing agents. This paper demonstrates the ability to explain complex RL mechanics clearly, connect theory to real engineering systems, and collaborate in a group research context, skills directly relevant to roles in ML engineering, AI research, and applied AI product development.
An interactive, coached AI session working through three progressively harder real-world ML data challenges: missing data imputation in energy forecasting, data drift in a live fraud detection system, and privacy-constrained feature selection for a high-dimensional recommendation engine. Each scenario required structured reasoning, trade-off analysis, and validation design, with real-time feedback from an AI coach evaluating the depth and quality of every decision.
Demonstrate applied ML judgment, beyond conceptual knowledge alone, by working through realistic, constraint-laden data challenges and defending decisions with clear reasoning and trade-off awareness.
Platform: SchoolAI Scenario Coach
Structure: Three escalating scenarios
Evaluation: Real-time AI feedback + instructor review of recorded transcript
Diagnostic thinking: MAR vs MNAR analysis
Systems reasoning: Drift detection + retraining logic
Constraint navigation: Privacy, latency, quality trade-offs
Platform: SchoolAI (AIML-500 Coach)
Concepts: PSI, KL divergence, MICE, differential privacy, embedding compression
Course: AIML-500-01N1G
Real ML work is rarely about picking the right algorithm from a list. It is about making defensible decisions when data is messy, systems are live, constraints are real, and trade-offs are unavoidable. This artifact demonstrates exactly that kind of thinking: the ability to reason through ambiguous, multi-constraint problems out loud, defend choices under probing follow-up questions, and show genuine understanding rather than pattern-matched answers. For employers building or maintaining production ML systems, that kind of applied critical thinking is directly what they need on their teams.
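One of the concepts exercised in the drift-detection scenario was the Population Stability Index (PSI). As a minimal sketch of the idea, assuming the common equal-width binning convention and the widely used 0.25 "significant shift" threshold (neither is taken from the session transcript):

```python
import math

def psi(expected, actual, n_bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Equal-width bins are taken over the baseline's range; counts become
    proportions, and a small floor avoids log(0) for empty bins.
    """
    lo, hi = min(expected), max(expected)

    def proportions(sample):
        counts = [0] * n_bins
        for x in sample:
            # Clamp values outside the baseline range into the edge bins.
            idx = min(n_bins - 1, max(0, int((x - lo) / (hi - lo) * n_bins)))
            counts[idx] += 1
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions score near 0; a shifted distribution scores well
# above the conventional 0.25 alert threshold.
baseline = [i / 100 for i in range(100)]         # roughly uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]    # mass pushed to the upper half
```

A monitoring job would compute this per feature between the training snapshot and a recent window of live traffic, and flag features crossing the threshold for retraining review.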
portfolio reflection
Critical reflection on each artifact: what I built, why it matters, and what I learned.
For my first portfolio artifact, I selected the AI and ML Timeline created during Workshop 1 of AIML-500. My intended audience is future employers and collaborators in the AI/ML field: professionals who want to assess not just what I know technically, but how I think, communicate, and situate myself within the broader field.
"I chose the timeline because it demonstrates something a resume cannot easily show: the ability to take a large, complex body of knowledge, organize it meaningfully, and present it in a way that is accessible to different kinds of readers."
Presenting this work for a professional audience required me to think beyond the academic context in which it was created. I added audience-facing framing to connect the historical analysis to practical, real-world decision-making, shifting the question from "what did I do?" to "why would someone else care about what I did?"
I selected GitHub Pages as my portfolio platform because it signals technical credibility to employers in the AI/ML field. A portfolio hosted on GitHub demonstrates comfort with technical environments, which is itself a relevant signal for the roles I am pursuing.
For my second artifact, I chose the WellGuide: Sleep Insights chatbot because it demonstrates a completely different dimension of AI/ML competency. Where the timeline shows I can research and synthesize, WellGuide shows I can build and design. This artifact speaks more directly to employers evaluating practical, applied AI skills.
"The most important lesson from building WellGuide was that responsible AI design is not just about what the bot can do; it is about what you deliberately choose not to let it do."
Customizing this artifact for a professional audience meant surfacing the design thinking process explicitly, because employers in AI integration roles care deeply about methodology, not just outcomes. The five-stage walkthrough makes that process visible in a way a simple project description would not.
Focusing specifically on sleep health rather than general wellness taught me the value of a well-scoped AI tool. A narrower focus made the bot more useful, more trustworthy, and easier to design responsibly. Navigating those scope decisions is now one of my most transferable skills in AI design.
For my third portfolio artifact, I selected the group research paper on Reinforcement Learning submitted for the 3.3 Assignment in AIML-500. My intended audience is employers and collaborators evaluating applied ML knowledge: specifically, whether I can connect foundational algorithm theory to real-world engineering systems rather than just recite textbook definitions.
"The Pac-Man and Tesla FSD pairing was deliberate: one is a toy environment most people recognize; the other is one of the most complex RL deployments in production. Holding both together in the same analysis forces you to see what is truly universal about the algorithm."
Working in a group of eight shaped this paper in important ways. With multiple contributors, the challenge was not producing content; it was ensuring the paper read as a coherent argument rather than a collection of separate sections. My contribution focused on the structural narrative thread: the idea that Pac-Man and Tesla FSD share not just an algorithm but a philosophy of learning through experience rather than instruction.
The credit assignment problem turned out to be the most intellectually rich section to develop. It is easy to describe the basic RL loop, but explaining why connecting a decision made six seconds ago to an outcome two seconds later is genuinely hard, and doing so without losing a non-specialist reader, required careful sequencing. That challenge taught me something practical about communicating ML concepts: the difficulty is rarely the math; it is the causality.
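The causality difficulty described above is what the discounted return formalizes: a reward is credited back to every earlier decision, weighted down by how far in the past that decision was. A tiny sketch of the recurrence, with a trajectory and discount value invented purely for illustration:

```python
GAMMA = 0.9  # discount factor: how strongly a future reward credits a past decision

# Toy trajectory: four decisions with no immediate reward, then a single payoff.
rewards = [0.0, 0.0, 0.0, 0.0, 1.0]

# The return G_t at each step credits that step with all future rewards,
# computed backward via the recurrence G_t = r_t + GAMMA * G_{t+1}.
returns, g = [], 0.0
for r in reversed(rewards):
    g = r + GAMMA * g
    returns.append(g)
returns.reverse()

# The decision four steps before the payoff still receives credit (0.9 ** 4),
# just less than the decision made immediately before it:
# returns ≈ [0.656, 0.729, 0.81, 0.9, 1.0]
```

The backward pass is the whole trick: every earlier decision inherits a discounted share of the eventual outcome, which is how an algorithm attributes an outcome to choices made seconds before it.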
For my fourth portfolio artifact, I selected the Data Challenge Scenario Coach session completed during Workshop 6 of AIML-500. My intended audience is employers and technical hiring teams who want to see not just what I know about machine learning, but how I actually think when the problems get hard and there are no clean textbook answers in sight.
"The most honest test of whether you understand something is whether you can defend your decisions under follow-up questioning and not just state them once and move on."
What made this activity genuinely valuable for a portfolio is that it is not a static deliverable. It is a recorded thinking process. The coach pushed back on every answer, asked for single metrics instead of hedged ranges, and probed whether I could design a concrete experiment rather than describe one abstractly. That kind of pressure is much closer to what a real technical interview or project review looks like than any paper or quiz.
The three scenarios deliberately covered different failure modes in production ML (missing data, distribution shift, and privacy constraints), which meant I could not rely on a single mental framework across all three. Scenario 1 required statistical reasoning about missingness mechanisms. Scenario 2 required systems thinking about live model behavior and delayed feedback loops. Scenario 3 required navigating competing pressures simultaneously: privacy compliance, latency, recommendation quality, and real-time adaptability. Holding all of those constraints in tension and still producing a coherent strategy is a skill that only develops through practice, not memorization.
Adapting this for a professional audience meant being explicit about what the artifact actually demonstrates. A transcript of a coaching session looks informal on the surface. But what it contains is structured reasoning, trade-off acknowledgment, and a willingness to be corrected and to refine, which is exactly the kind of thinking that separates practitioners who understand ML from those who have only studied it.
get in touch
Open to conversations about AI/ML, learning opportunities, and collaboration.