mcxraider/interview-preparation

Data Science / Data Analyst Interviews

Question-and-answer materials I use to study for data science interviews.
Some credits to: this repo

Note: Do contribute with PRs! The repo is very messy at the moment, sorry.

Coding Interviews:

Video Materials:

  1. StatQuest - Machine Learning / Statistics / Deep Learning
  2. 3Blue1Brown (3b1b) - Math for Machine Learning
  3. Andrej Karpathy - LLM Legend
  4. Indently - Good coding practices

Reading Materials:

Supervised Learning Algorithms:

  1. Linear Regression
  2. Logistic Regression
  3. Decision Trees
  4. Random Forest
  5. Support Vector Machines (SVM)
  6. K-Nearest Neighbors (KNN)
  7. Naive Bayes
  8. Gradient Boosting Machines (GBM)
  9. AdaBoost
  10. XGBoost
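To make one of these concrete, here is a minimal from-scratch sketch of K-Nearest Neighbors (item 6). The toy dataset and choice of `k` are made up for illustration; in practice you would reach for scikit-learn's `KNeighborsClassifier`.

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of point x by majority vote among its k nearest neighbors."""
    # Euclidean distance from x to every training point.
    dists = [(math.dist(x, xi), yi) for xi, yi in zip(X_train, y_train)]
    dists.sort(key=lambda d: d[0])
    # Majority vote over the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Two well-separated toy clusters.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.5, 0.5)))  # lands in the "a" cluster
```

A classic interview talking point: KNN has no training phase at all, so every query pays the full cost of scanning the training set at prediction time.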

Unsupervised Learning Algorithms:

  1. K-Means Clustering
  2. Hierarchical Clustering
  3. DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
  4. Principal Component Analysis (PCA)
  5. t-SNE (t-distributed Stochastic Neighbor Embedding)
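For intuition, here is a bare-bones K-Means (item 1) in pure Python: Lloyd's algorithm with naive random initialization. The toy points are made up, and a real project would use `sklearn.cluster.KMeans`, which defaults to the smarter k-means++ initialization.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    random.seed(seed)
    centroids = random.sample(points, k)  # naive random init (not k-means++)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(coords) / len(cl) for coords in zip(*cl))
    return centroids

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(kmeans(points, k=2))
```

Note that the result depends on initialization and k must be chosen up front, which is exactly what DBSCAN (item 3) avoids.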

Reinforcement Learning Algorithms:

  1. Q-Learning
  2. Deep Q-Networks (DQN)
  3. Policy Gradient Methods
  4. Proximal Policy Optimization (PPO)
  5. SARSA (State-Action-Reward-State-Action)
    • An on-policy algorithm that updates Q-values using the action actually taken by the current policy (unlike Q-learning, which bootstraps from the greedy action).
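Tabular Q-learning (item 1) fits in a few lines. Below is a sketch on a made-up 1-D chain environment (start at state 0, reward for reaching the last state), not a general-purpose implementation; the hyperparameters are arbitrary illustrative choices.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D chain: reward 1 for reaching the last state."""
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection (ties broken toward "right").
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max((1, 0), key=lambda act: Q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Off-policy update: bootstrap from the best action in the next state.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
print([max((1, 0), key=lambda a: Q[s][a]) for s in range(4)])  # greedy policy per state
```

The contrast with SARSA (item 5) is the update target: SARSA uses `Q[s2][a2]` for the action it will actually take next, rather than `max(Q[s2])`.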

Neural Networks and Deep Learning Algorithms:

  1. Artificial Neural Networks (ANN)
    • The basic neural network model used for various tasks.
  2. Convolutional Neural Networks (CNN)
  3. Recurrent Neural Networks (RNN)
  4. Long Short-Term Memory Networks (LSTM)
  5. Transformer Networks
    • A deep learning architecture primarily used in NLP tasks (e.g., BERT, GPT).
  6. Generative Adversarial Networks (GANs)

Recommendation Systems:

  1. https://towardsdatascience.com/recommender-systems-a-complete-guide-to-machine-learning-models-96d3f94ea748
  2. Collaborative Filtering: recommends items based on the ratings of users with similar tastes.
  3. Content-Based Filtering: recommends items whose features resemble those the user has already liked.
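A toy version of user-based collaborative filtering fits in a few lines. The rating matrix below is invented, and computing cosine similarity over raw rows (zeros included) is a simplification that a production system would avoid by, e.g., restricting to co-rated items or mean-centering.

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def predict_rating(ratings, user, item):
    """Predict ratings[user][item] as a similarity-weighted average over other users."""
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or row[item] == 0:
            continue  # skip self and users who haven't rated the item
        sim = cosine(ratings[user], row)
        num += sim * row[item]
        den += abs(sim)
    return num / den if den else 0.0

# Rows = users, columns = items, 0 = unrated.
R = [
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
]
print(predict_rating(R, 0, 2))
```

Because user 0 is far more similar to user 1 (who rated the item 1) than to user 2 (who rated it 5), the prediction comes out low.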

Ensemble Learning Algorithms:

  1. Bagging
    • Trains several base models on bootstrap resamples of the data and combines their predictions (e.g., Random Forest).
  2. Boosting
    • Sequentially builds models that correct the errors of previous models (e.g., XGBoost, AdaBoost).
  3. Stacking
    • Combines multiple models by training a meta-model on their predictions.
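As a sketch of bagging, here is a bootstrap ensemble of decision stumps with majority voting; the 1-D dataset and model count are made up for illustration, and Random Forest adds feature subsampling on top of this idea.

```python
import random
from collections import Counter

def fit_stump(X, y):
    """Best single-threshold classifier on 1-D data (a decision stump)."""
    best = None
    for t in X:
        for sign in (1, -1):
            preds = [sign if x >= t else -sign for x in X]
            acc = sum(p == yi for p, yi in zip(preds, y))
            if best is None or acc > best[0]:
                best = (acc, t, sign)
    _, t, sign = best
    return lambda x: sign if x >= t else -sign

def bagged_predict(X, y, x_new, n_models=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample, then majority-vote."""
    random.seed(seed)
    votes = []
    for _ in range(n_models):
        idx = [random.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx])
        votes.append(stump(x_new))
    return Counter(votes).most_common(1)[0][0]

X = [0.0, 0.5, 1.0, 5.0, 5.5, 6.0]
y = [-1, -1, -1, 1, 1, 1]
print(bagged_predict(X, y, 0.2), bagged_predict(X, y, 5.8))
```

The averaging over resamples is what reduces variance; boosting, by contrast, reweights the data sequentially, and stacking learns how to combine the base models instead of voting.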
