More on my personal background: I am a U.S. citizen who grew up and was educated in Singapore. Besides research, I have worked as a software engineer at Salesforce and at several startups, ranging from deep tech (quantum computing) to B2C companies based in Southeast Asia and the United States. I also spent Fall 2022 abroad at the University of Oxford (Magdalen College), where I studied graph representation learning and philosophy of mind, and tried my hand at rowing! Outside of work, I enjoy playing tennis, hiking, reading (and occasionally writing) science fiction, and brush calligraphy.

Below is an assortment of works that summarize my academic interests.


Winter & Spring 2024: CS 224N (Natural Language Processing with Deep Learning) Graduate Course Assistant
Taught by Prof. Tatsunori Hashimoto / Prof. Diyi Yang (Winter 2024) and Prof. Christopher Manning (Spring 2024)
Topics: Natural Language Processing, Machine Learning, Deep Learning

Fall 2023: CS 157 (Computational Logic) Graduate Course Assistant
Taught by Prof. Michael Genesereth
Topics: Propositional Logic, Relational Logic, Functional Logic

Computational Projects

Today Years Old: Adapting Language Models to Word Shifts [paper] [poster] [code]
Final report, poster, and code for Stanford’s CS 224N: Natural Language Processing with Deep Learning (Winter 2023)
I finetuned GPT-2 and RoBERTa to predict word embeddings for novel lexical items given their definitions. The models were trained via supervised learning on lexical items from a regular dictionary, using embeddings from the pretrained models off-the-shelf as ground truth. The resulting models were then used to predict word embeddings for Urban Dictionary words given their definitions.

Topics: Natural Language Processing, Machine Learning, Supervised Learning, Domain Adaptation
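The training objective above can be illustrated with a toy stand-in: learn a map from definition representations to target word embeddings by supervised regression. In the actual project the map is a finetuned GPT-2 / RoBERTa; here a closed-form ridge regression on synthetic vectors sketches the same idea, and all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_def, d_emb, n_words = 16, 8, 200

# "Definition" features and ground-truth embeddings from a pretrained model.
X = rng.normal(size=(n_words, d_def))
W_true = rng.normal(size=(d_def, d_emb))
Y = X @ W_true + 0.01 * rng.normal(size=(n_words, d_emb))

# Supervised fit: minimize ||XW - Y||^2 + lam * ||W||^2 in closed form.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(d_def), X.T @ Y)

# Predict an embedding for a held-out "novel word" definition.
x_new = rng.normal(size=(1, d_def))
pred = x_new @ W
print(pred.shape)  # (1, 8)
```

The point of the sketch is the supervision signal: ground-truth embeddings for known words, so the learned map generalizes to definitions of unseen words.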

A Shot in the Dark: Modeling Improved Zero-Shot and Few-Shot Transfer Learning with Self-Supervised Models for Sentiment Classification [paper] [poster]
Final report and poster for Stanford’s CS 229: Machine Learning (Spring 2022)
We evaluated zero-shot and few-shot transfer learning with self-supervised embeddings and supervised models at various scales, comparing sentiment classification performance against DistilBERT.

Topics: Natural Language Processing, Machine Learning, Self-Supervised Learning, Transfer Learning

Model Predictive Curiosity [paper] [poster]
Final report and poster for Stanford’s PSYCH 240A: Curiosity in Artificial Intelligence (Spring 2022)
We proposed Model Predictive Curiosity (MPCu), a framework that backpropagates through curiosity values predicted by a forward dynamics model to select curiosity-maximizing actions. We demonstrated the capability of MPCu to optimize for high-curiosity action values and enrich multi-object interactions in a Box2D environment.

Topics: Curiosity-Based Models, Model-Based Reinforcement Learning, Representation Learning, Self-Supervised Learning
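The core loop above, selecting an action by gradient ascent on a curiosity value predicted through a forward dynamics model, can be sketched in a few lines. This is a toy with a linear dynamics model and a made-up curiosity proxy (distance from "familiar" states), not the paper's actual networks; in practice action magnitudes would also be constrained.

```python
import numpy as np

rng = np.random.default_rng(1)
d_s, d_a = 4, 2
A = rng.normal(size=(d_s, d_s)) * 0.1   # toy forward dynamics: s' = A s + B a
B = rng.normal(size=(d_s, d_a))

s = rng.normal(size=d_s)     # current state
s_familiar = np.zeros(d_s)   # predicted states near here are "boring"

def predicted_next(a):
    return A @ s + B @ a

def curiosity(a):
    # Hypothetical proxy: distance of predicted next state from familiar territory.
    diff = predicted_next(a) - s_familiar
    return 0.5 * diff @ diff

def curiosity_grad(a):
    return B.T @ (predicted_next(a) - s_familiar)

# Gradient *ascent* on the action to maximize predicted curiosity.
a = np.zeros(d_a)
for _ in range(50):
    a += 0.02 * curiosity_grad(a)

print(curiosity(a) > curiosity(np.zeros(d_a)))  # True
```

With a neural dynamics model the analytic gradient would be replaced by autodiff through the model, but the action-selection step is the same.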

Machine Learning-based platform using iBeacon Sensors for Product Location and Indoor Navigation to Improve Consumer Retail Experience
High school research engineering project (2018-2019)
We trained an automatic speech recognition engine contextualized to Singaporean accents and terminology, which we incorporated into a mobile app to help consumers navigate local supermarkets with verbal queries. During testing, we combined the mobile app with a lattice formation of Bluetooth Low Energy sensors in a convenience store, which identified the user’s position relative to the intended item, then generated and displayed the shortest path.

Topics: Natural Language Processing, Recommendation Systems, Machine Learning, Indoor Geolocation, Bluetooth Sensor Systems
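The navigation step described above can be sketched as a shortest-path search over a grid map of the store (0 = aisle, 1 = shelf). The real system estimated the user's position from BLE beacon signals; here positions are given directly, and breadth-first search stands in for the path search, since all grid moves cost the same.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS from start to goal over walkable cells; returns the cell path."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}           # doubles as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:         # reconstruct path by walking back
            path = []
            node = goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # item unreachable from the user's position

store = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # a shelf blocks the middle
    [0, 0, 0, 0],
]
path = shortest_path(store, (0, 0), (2, 3))
print(len(path))  # 6 cells: around the shelf
```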

Philosophy Papers

The Missing Piece: Dispelling the Mystery of Introspective Illusion [paper]
Final paper for Stanford’s PHIL 186: Philosophy of Mind (Spring 2023)
I address the question of why illusionism, the view that phenomenal characteristics of mental states are illusory, is so hard to stomach. Beyond explaining how the illusion of phenomenality arises, a robust theory of illusionism must adequately explain the incredible strength of the illusion and the difficulty of freeing oneself from the grip of this enduring intuition. Frankish (2016) attempts to address the former but not the latter, and I argue that explaining the potency of phenomenal illusions is the crucial missing piece for a sound illusionist theory. I present two main desiderata for a positive theory of illusionism by drawing connections to related theories of consciousness, namely global workspace theory (Dennett, 2001) and Buddhist philosophy.

Topics: Consciousness, Illusionism, Higher-Order Thought Theory, Wittgenstein

Consciousness, Phenomenality, and the Representational Layer [paper]
Final paper for Stanford’s SYMSYS 202: Theories of Consciousness (Winter 2023)
I propose that metacognition on top of the representational layer, beyond mere possession of representational states, is critical for consciousness. I also distinguish between lower-level sensory states used only as intermediates in perceptual processing (e.g., edge detection, shape from shading), and higher-level sensory states (outputs of perceptual processing), and explore the introduction of phenomenality in this process. I finally discuss the implications of the above proposals for the functional and evolutionary roles of consciousness.

Topics: Consciousness, Representationalism, Phenomenality, Higher-Order Thought Theory, Global Workspace Theory

Philosophy of Mind: Wittgenstein, The Unconscious Mind, and Self-Knowledge [paper]
Collection of essays for OSPOXFRD 199: Philosophy of Mind (Fall 2022)
A compilation of essays written for my Directed Reading in Philosophy of Mind, during my quarter abroad at the University of Oxford. Each essay was written to prepare for biweekly tutorial discussions over the quarter. The four essay titles and topics are as follows:
Essay 1: Might your new philosophy tutor be a non-conscious ‘zombie’ for all you know? (Topic: Other Minds)
Essay 2: Might you see red where I see blue? (Topic: The Privacy of Experience)
Essay 3: What is it to make the unconscious conscious? (Topic: The Unconscious Mind)
Essay 4: How is knowledge of my own states of mind possible? (Topic: Self-Knowledge)

Topics: Philosophy of Mind, Philosophy of Psychology, Wittgenstein, Private Language Argument, Consciousness, Self-Knowledge

Predictive Processing: Efficiently processing high-dimensional, multimodal inputs [paper]
Final paper for Stanford’s SYMSYS 205: The Philosophy and Science of Perception (Spring 2022)
I argue for the plausibility of the predictive processing framework over the standard bottom-up model of perception, especially in the context of efficiently processing high-dimensional multimodal inputs, where the qualitative space of each modality has unique dimensionality and structure.

Topics: Multimodal Perception, Perceptual Cognition, Cognitive Processing

Large Language Models: Intelligence, Understanding, and Intentionality [paper]
Final paper for Stanford’s SYMSYS 207: Conceptual Issues in Cognitive Neuroscience (Fall 2021)
I argue that modern large language models (LLMs) cannot achieve strong intelligence. LLMs do not learn quickly and flexibly, nor do they employ heuristics for inference-making in a manner that an intelligent system would. Furthermore, LLMs have limited capacity for understanding beyond symbol manipulation, and are purely reactional systems that lack intentionality.

Author’s Note (March 2023): This paper was written before the release of ChatGPT and GPT-4 (or GPT-x, depending on how far in the future you’re reading this), and in hindsight, I acknowledge this paper perhaps does not give sufficient credit to the impressive emergent behaviors observed in LLMs. However, my general stance towards (purely language-based) LLMs is still largely aligned with this paper, and another more recent work that articulates views I am sympathetic to is Shanahan (2022). That said, there are many cool developments expanding upon LLMs (like vision-language models, or grounded language models more generally) that I’m excited about!
Topics: Natural Language Processing, Artificial Intelligence, Natural Language Understanding, Intentionality

Mathematics Papers

Asymmetric Processes [paper]
Research paper for Stanford’s MATH 101: Math Discovery Lab (Winter 2024)
This paper considers two models of asymmetric particle processes formulated as Markov processes, and studies their behavior at equilibrium: first a model in which the particle number stays constant, and second a model in which particles enter and exit at fixed rates. We derive probability distributions of particle configurations at equilibrium, as well as properties such as average particle speed, by appealing to irreducibility, aperiodicity, and double stochasticity.

Topics: Markov Processes, Markov Chains, Probability Theory
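A minimal numerical instance of this kind of analysis: an asymmetric random walk on a ring of n sites (hop right with probability p, left with 1 - p). Its transition matrix is doubly stochastic, so the equilibrium distribution is uniform, which power iteration confirms. This is an illustrative toy, not a model from the paper itself.

```python
import numpy as np

n, p = 5, 0.7
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = p        # hop right
    P[i, (i - 1) % n] = 1 - p    # hop left

# Double stochasticity: rows and columns both sum to 1.
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

# Power iteration from an arbitrary start distribution; the chain is
# irreducible and (on an odd ring) aperiodic, so this converges.
mu = np.zeros(n)
mu[0] = 1.0
for _ in range(500):
    mu = mu @ P

print(np.round(mu, 3))  # ~[0.2 0.2 0.2 0.2 0.2]
```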

Infinite Coin Tosses [paper]
Research paper for Stanford’s MATH 101: Math Discovery Lab (Winter 2024)
This paper explores cumulative distribution functions for infinite coin tosses, parameterized by the probability p of flipping heads. We graph the outcomes of simulated coin flips and study properties of the cumulative distribution function, analyzing its pathological behavior in terms of continuity, differentiability, and arc length.

Topics: Probability Theory, Continuous Random Variables
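The object studied above can be simulated directly: identify a toss sequence (heads = 1) with the real number x = Σ bᵢ 2⁻ⁱ and estimate the CDF of x from truncated sequences. For p = 0.5 the law is uniform on [0, 1]; for p ≠ 0.5 it is the singular distribution whose pathologies the paper analyzes. The function names and truncation depth here are illustrative choices, not the paper's.

```python
import random

def sample_x(p, depth=30, rng=random):
    """Draw a truncated infinite toss sequence, returned as a number in [0, 1]."""
    return sum(2 ** -(i + 1) for i in range(depth) if rng.random() < p)

def empirical_cdf(p, t, n=20000, seed=0):
    """Monte Carlo estimate of F(t) = P(x <= t)."""
    rng = random.Random(seed)
    return sum(sample_x(p, rng=rng) <= t for _ in range(n)) / n

# x <= 0.5 essentially means the first toss is tails, so F(0.5) = 1 - p.
print(round(empirical_cdf(0.5, 0.5), 2))  # ~0.5: uniform for a fair coin
print(round(empirical_cdf(0.8, 0.5), 2))  # ~0.2: mass shifts toward 1
```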

Hilbert’s 10th Problem [paper]
Research paper for Stanford’s PHIL 152: Computability and Logic (Spring 2023)
This paper introduces Hilbert’s 10th Problem, which asks for an algorithm to determine the solvability of arbitrary Diophantine equations over the integers, and presents a proof of its resolution.

Topics: Computability Theory
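One half of the problem is easy to demonstrate: solvability of a Diophantine equation is semi-decidable. Enumerating integer tuples by increasing bound finds a root whenever one exists; what the MRDP theorem rules out is a procedure that also always terminates on unsolvable instances. The helper below is an illustrative sketch, not from the paper.

```python
from itertools import product

def find_solution(poly, n_vars, max_bound=50):
    """Search for an integer root of poly (a callable) over n_vars variables,
    enumerating tuples shell by shell so every tuple is eventually reached."""
    for bound in range(max_bound + 1):
        for xs in product(range(-bound, bound + 1), repeat=n_vars):
            # Only test tuples on the current shell (max coordinate = bound).
            if max(map(abs, xs), default=0) == bound and poly(*xs) == 0:
                return xs
    return None  # nothing within the bound -- NOT a proof of unsolvability

# x^2 + y^2 - 25 = 0 has integer roots such as (3, 4) up to sign and order.
sol = find_solution(lambda x, y: x * x + y * y - 25, 2)
print(sol)
```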

Technical Reports

A Proposal for Building Safety Benchmarking Services in CAIS systems [paper]
Final report for Stanford Existential Risk Initiative’s AI research program (Spring 2021)
I propose a protocol encompassing safety benchmarking services for CAIS systems, ranging from pre-deployment safety benchmarks that are applied during model training (transparency tools, systems enabling robust and safe exploration, and performance when subject to adversarial policies) to post-deployment safety benchmarks that are applied during model application (monitoring systems and trip wires to ensure agent behavior is within safety standards and expectations).

Topics: AI Safety, Benchmarking Tools, AI Existential Risk

Implications of the Comprehensive AI Services Framework on AI Safety Research [paper]
Final report for Stanford Existential Risk Initiative’s AI research program (Winter 2021)
I argue that developing powerful AI systems in line with the Comprehensive AI Services (CAIS) framework outlined in Eric K. Drexler’s Reframing Superintelligence (2019) is not just likely but should be encouraged, due to the potential for enhanced safety measures to mitigate AI existential risk.

Topics: AI Safety, Hierarchical Reinforcement Learning, AI Existential Risk

Autonomous Vehicles: From Vision to Reality [paper]
Final paper for Stanford’s CS 56N: Great Discoveries and Inventions in Computing, taught by Prof. John Hennessy (Fall 2020)
We provide a high-level overview of current developments in the field of autonomous driving, and look into the technological hurdles that lie ahead in enhancing the security of autonomous systems and implementing them at scale. We analyze challenges in consumer safety and in securing autonomous systems against unwanted exploitation from a technical perspective, with additional commentary on the ethical and policy implications of these developments, to make projections about the future of autonomous vehicles.

Topics: Autonomous Driving, Computer Vision, LiDAR/RADAR Sensor Systems, AI Safety


The Future of Human-Machine Interaction: Keeping Humans in the Loop [paper]
Final paper for Stanford’s OSPOXFRD 29: Artificial Intelligence & Society (Fall 2022)
The doomsday narrative that humans will be destroyed in a fierce intelligence competition with AI systems, while remarkably enduring, is a narrow view that distracts us from active measures that can be taken in the present day. I assert that a key tenet of AI development going forward should be keeping humans in the loop. Distinguishing between non-immediate decision making (e.g., data analytics and robotics) and time-sensitive, safety-critical decision making (e.g., autonomous vehicles and aircraft) is key to understanding how to best facilitate human-AI collaboration in each case. I also evaluate three technical research areas that facilitate human-in-the-loop AI development and deployment.

Topics: Human-AI Interaction, AI Safety, Human-In-The-Loop Development, Decision-Making

When Worlds Collide: Challenges and Opportunities in Virtual Reality [paper] [publication]
Final paper for Stanford’s HISTORY 44Q: Gendered Innovations in Science, Medicine, Engineering, and Environment (Fall 2021)
Published in peer-reviewed journal, Embodied: The Stanford Undergraduate Journal of Feminist, Gender, and Sexuality Studies

I explore virtual reality (VR) software applications that contain discriminatory content and promote harassment of historically underrepresented communities, and identify innovation processes to reframe VR applications so that they promote gender and social equality. I also examine design choices in VR hardware that tend to exclude female users. To address this, I propose that a better sex balance among research participants is needed to rethink reference models for VR hardware, leading to more sex-sensitive VR headsets.

Topics: Virtual Reality, Gendered Innovations