Harshavardhan Chittaluru

Passionate data enthusiast and adept software developer, skilled at process optimization and at deriving actionable insights from complex datasets.

Brain Stroke Prediction using CNN

My Projects

Brain stroke prediction using Convolutional Neural Networks (CNNs) involves the application of deep learning techniques to analyze medical imaging data, such as MRI or CT scans, to identify patterns indicative of stroke risk. CNNs excel at extracting hierarchical features from images, allowing them to learn complex relationships within medical images and predict the likelihood of a stroke occurrence. By training on a dataset of labeled brain scans, the CNN can learn to recognize subtle patterns associated with stroke risk factors, enabling early detection and intervention to prevent stroke-related complications. This approach holds promise for improving stroke diagnosis and prognosis, potentially saving lives through early intervention strategies. Here is the dataset.
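As a rough illustration of the hierarchical feature extraction a single CNN layer performs, here is a minimal NumPy sketch of one convolution → ReLU → max-pooling pass. The tiny "scan", the edge-detecting kernel, and all values are invented for illustration; the actual project would use a deep learning framework trained on real MRI/CT data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linearity applied after the convolution."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping size x size blocks."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "scan" with a vertical edge, and a hand-made edge kernel
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)
edge_kernel = np.array([[-1, 0, 1]] * 3, dtype=float)

features = max_pool(relu(conv2d(image, edge_kernel)))
print(features)  # 2x2 pooled feature map highlighting the edge
```

A real CNN stacks many such layers with learned (not hand-made) kernels, which is what lets it pick up the subtle patterns mentioned above.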

Employee Attrition Prediction

Employee attrition prediction using supervised algorithms involves utilizing historical employee data to train machine learning models. These models are then used to predict the likelihood of employees leaving the organization in the future. By analyzing factors such as job satisfaction, salary, tenure, performance ratings, and other relevant variables, these algorithms can identify patterns and trends associated with employee turnover. Supervised algorithms like logistic regression, decision trees, random forests, or gradient boosting can be employed to build predictive models that help organizations proactively address retention issues, optimize workforce management strategies, and ultimately reduce turnover rates. Here is the dataset.
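To make the workflow concrete, here is a small scikit-learn sketch of training a random forest on synthetic HR-style data. The feature names, the attrition rule, and every value below are made up for illustration; the real project would use the actual employee dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for HR data: (satisfaction 1-5, salary in $k, tenure in years)
rng = np.random.default_rng(0)
X = rng.uniform([1, 30, 0], [5, 150, 20], size=(500, 3))
# Toy label: low satisfaction combined with short tenure -> likely to leave
y = ((X[:, 0] < 2.5) & (X[:, 2] < 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2f}")
# predict_proba yields a per-employee attrition risk score for HR to act on
risk = model.predict_proba(X_test)[:, 1]
```

Swapping in logistic regression or gradient boosting is a one-line change, which is why comparing several supervised models, as described above, is standard practice.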

Analysis on GSS Insights Dataset

I meticulously analyzed the General Social Survey (GSS) dataset, employing statistical techniques to unveil correlations between socio-demographic factors and health outcomes. Through rigorous data cleaning and visualization in R with ggplot2 (boxplots and related plots), I enhanced clarity and insight into the dataset. Employing predictive models such as Random Forest and Poisson Regression, I identified pivotal demographic factors influencing health conditions. My approach encompassed a thorough examination of the data landscape, ensuring robustness and reliability in the analysis. Through this process, I contributed valuable insights into the relationship between socio-demographic variables and health outcomes, facilitating informed decision-making.
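The analysis itself was done in R, but the Poisson regression step can be sketched in Python with scikit-learn as an analogue. The covariates, coefficients, and count outcome below are synthetic stand-ins, not GSS values:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Synthetic count outcome (e.g. a tally of reported conditions) driven by
# two standardized demographic covariates; coefficients are invented.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 2))
true_coef = np.array([0.4, -0.3])
lam = np.exp(0.5 + X @ true_coef)   # Poisson rate via the log link
y = rng.poisson(lam)

# Fit a Poisson GLM; the learned coefficients approximate the true ones
model = PoissonRegressor(alpha=1e-4).fit(X, y)
print("estimated coefficients:", model.coef_.round(2))
```

The log-link means each coefficient is interpretable as a multiplicative effect on the expected count, which is what makes Poisson regression a natural fit for count-valued health outcomes.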

Seminar on Data Deduplication using Hadoop

In the seminar on Data Deduplication using Hadoop, I elucidated the innovative approach of eliminating redundant data to optimize storage and processing efficiency. Exploring the fundamental principles of Hadoop's distributed computing framework, I highlighted its pivotal role in handling vast datasets and implementing deduplication techniques at scale. Through real-world examples and case studies, attendees gained insights into the significance of data deduplication in various industries, from finance to healthcare. Furthermore, I delved into the technical intricacies of Hadoop's MapReduce paradigm and its application in identifying and removing duplicate data chunks. Overall, the seminar provided a comprehensive understanding of how Hadoop empowers organizations to streamline data management processes and drive cost-effective solutions through deduplication strategies.
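The map/reduce deduplication flow described above can be sketched in plain Python as a single-process stand-in for what Hadoop distributes across a cluster. Fixed-size chunking, the chunk size, and the toy input are simplifications; production systems often use content-defined chunking.

```python
import hashlib

def chunk(data: bytes, size: int = 8):
    """Split a byte stream into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def map_phase(chunks):
    """Map: emit (fingerprint, chunk) pairs keyed by a content hash."""
    for c in chunks:
        yield hashlib.sha256(c).hexdigest(), c

def reduce_phase(pairs):
    """Reduce: keep one copy per fingerprint, mimicking the shuffle +
    reduce step Hadoop would run across many nodes."""
    store = {}
    for key, c in pairs:
        store.setdefault(key, c)
    return store

data = b"hellohellohelloworld"  # toy input with repeated 5-byte blocks
unique = reduce_phase(map_phase(chunk(data, size=5)))
print(f"{len(chunk(data, 5))} chunks in, {len(unique)} unique stored")
```

Hashing in the map phase means identical chunks land on the same reducer, so duplicates are eliminated without any node ever comparing raw chunk contents pairwise.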

About Me

I'm really enthusiastic about diving into the exciting fields of data analysis and data science. I want to use my knack for breaking down information and my technical skills to uncover important insights and come up with solutions that make a difference. Alongside my love for data, I'm pretty good at web development too, which means I can create websites and apps that are easy for people to use. Right now, I'm learning a lot as a Student Assistant at the NYS Comptroller's office, helping to make things run more smoothly and keep data safe. Before this, I sharpened my data analysis skills at Axtria and my software chops as a Java Developer at Infosys. Plus, I'm not just a techie: I know how to lead and communicate well, which makes working in teams easier. I've got a bunch of tools in my toolbox, from different programming languages to machine learning and deep learning. With all this, I'm ready to take on the ever-changing world of tech and help make decisions based on data.
