Swapnil Bhattacharya
Portfolio
Neuroimaging Correlation between Autism and Hyperactivity Using Statistical Methods
● Conducted a research project analyzing the similarity between Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD), supporting the hypothesis that autistic individuals may exhibit hyperactivity.
● Processed and analyzed MRI datasets from ABIDE (ASD) and ADHD-200 (ADHD), performing preprocessing steps including brain extraction, eddy current correction, motion correction, and DTI fitting; improved image quality by raising the signal-to-noise ratio with the Noise2Void denoising method.
● Calculated neuroimaging metrics such as voxel counts, fractional anisotropy, Otsu's threshold, white matter hyperintensities, structural similarity, and asymmetry index, validating the hypothesis and contributing foundational research toward improved diagnosis and treatment of autism.

Awards & achievements:
● Commended for exceptional contributions to the project, collaborating with the Centre for Human Brain Health (CHBH) to explore advancements in medical science, data science, and psychology.
● Received accolades for the Wikipedia Recommendation System project, which was highly appreciated and is under consideration for implementation by Wikipedia.
● Co-founded an AI startup, TaleTech, which secured £100,000 in investment, demonstrating entrepreneurial spirit and innovative thinking in the tech industry.
Automating Wikipedia’s Manually Created Recommendation System
● Developed an automated recommendation system for Wikipedia's "See also" sections using BERT embeddings and cosine similarity, reducing manual curation effort.
● Processed and analyzed a dataset of 6 billion article interactions, optimizing computational efficiency by sampling 250,000 records and filtering on article quality, size, views, and share count.
● Achieved an 88% user satisfaction rate among 558,000 survey participants, demonstrating the effectiveness and user acceptance of the automated system.
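The core ranking step (cosine similarity between article embeddings) can be sketched as below. This is a minimal illustration, assuming the BERT embeddings have already been computed; here they are plain NumPy vectors, and the function name and signature are hypothetical.

```python
import numpy as np

def recommend_see_also(query_vec, article_vecs, titles, k=3):
    """Rank candidate articles by cosine similarity of their embeddings
    to the query article's embedding; return the top-k (title, score) pairs."""
    q = query_vec / np.linalg.norm(query_vec)
    A = article_vecs / np.linalg.norm(article_vecs, axis=1, keepdims=True)
    sims = A @ q                              # cosine similarity per candidate
    top = np.argsort(sims)[::-1][:k]
    return [(titles[i], float(sims[i])) for i in top]

# Toy example with 2-D stand-ins for BERT embeddings:
titles = ["Graph theory", "Baroque music", "Network science"]
vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([1.0, 0.0])
recs = recommend_see_also(query, vecs, titles, k=2)
```

In practice the candidate set would first be narrowed by the quality, size, view, and share-count filters described above, so the similarity ranking runs over a small shortlist rather than all articles.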
Classification of Speech Emotion Using Long Short-Term Memory (LSTM)
● Developed and implemented a deep learning model leveraging LSTM networks for robust speech emotion recognition, processing over 2,800 audio samples with high accuracy.
● Optimized feature extraction pipelines using librosa and other Python libraries to enhance model performance and ensure precise emotion classification across diverse datasets.
● Integrated machine learning workflows with efficient TensorFlow-based training, achieving scalable deployment for real-time audio sentiment analysis applications.
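The first stage of such a pipeline, turning raw audio into a per-time-step feature sequence an LSTM can consume, can be sketched as follows. This is a simplified NumPy-only illustration: the project used librosa features (e.g. MFCCs), whereas here per-frame log energy stands in as a single feature, and the frame length and hop (25 ms windows, 10 ms hop at 16 kHz) are assumed values.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a 1-D audio signal into overlapping frames
    (400 samples = 25 ms, hop 160 = 10 ms at 16 kHz)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return signal[idx]                       # shape (n_frames, frame_len)

def log_energy_features(signal, frame_len=400, hop=160):
    """Per-frame log energy: a minimal stand-in for the MFCC-style
    feature sequence fed to the LSTM, one value per time step."""
    frames = frame_signal(signal, frame_len, hop)
    energy = np.sum(frames ** 2, axis=1)
    return np.log(energy + 1e-10)            # epsilon avoids log(0) on silence

# One second of 16 kHz audio yields 98 time steps:
features = log_energy_features(np.zeros(16_000))
```

In the real pipeline each time step would carry a feature vector (e.g. 40 MFCCs) rather than a scalar, and the resulting (batch, time, features) tensors would be fed to a TensorFlow LSTM layer for training.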