
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

DATA SCIENCE AND MANAGEMENT (MCS102)


TOPIC: OPTIMIZATION AND
DATA SCIENCE PROBLEM SOLVING

GUIDED BY:
DR. REKHA B VENKATAPUR
HEAD OF DEPARTMENT
K S INSTITUTE OF TECHNOLOGY
KANAKAPURA ROAD, BENGALURU
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
DATA SCIENCE AND MANAGEMENT (MCS102)
TOPIC: OPTIMIZATION

PRESENTED BY:
DEEPTHI K - 1KS24SCS02
NAYANA R - 1KS24SCS04

GUIDED BY:
DR. REKHA B VENKATAPUR
INDEX (TABLE OF CONTENTS)
• Introduction
• Types of Optimization
• Optimization Techniques
• Optimization in Machine Learning
• Real-World Applications of Optimization
• Tools and Libraries for Optimization
• Conclusion
Introduction

• Optimization is the process of selecting the best solution from a set of possible
options, given a set of constraints.
• In data science, optimization helps improve model accuracy, reduce
computational costs, and enhance decision-making.
• Used in machine learning, artificial intelligence, operations research, and
business analytics.
• The goal of optimization is to minimize or maximize an objective function.
• Example: Minimizing prediction error in a machine learning model or
maximizing profits in a business scenario.
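As a minimal sketch of what "minimizing an objective function" means in code (assuming SciPy is available), the snippet below finds the x that minimizes the illustrative function f(x) = (x − 3)²:

```python
from scipy.optimize import minimize_scalar

# Objective function: smallest at x = 3
def objective(x):
    return (x - 3) ** 2

result = minimize_scalar(objective)
print(result.x)  # close to 3.0
```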
Types of Optimization

• Unconstrained Optimization: Unconstrained optimization refers to optimization problems
where there are no explicit restrictions on the values that the decision variables can take. The
objective function is free to vary without any external constraints.
 Applications of Unconstrained Optimization:
• Machine learning: Training models by minimizing loss functions.
• Financial modeling: Maximizing profit without constraints on investments.
• Physics simulations: Finding equilibrium states of physical systems.
• Constrained Optimization: Constrained optimization problems involve restrictions on the
values that the decision variables can take. These constraints can be equalities or inequalities.
 Applications of Constrained Optimization:
• Engineering: Optimizing material usage under weight constraints.
• Operations research: Minimizing costs with resource constraints.
• Machine learning: Training models with fairness or interpretability constraints.
 Linear vs. Nonlinear Optimization:
• Linear: Both the objective function and the constraints are linear.
 Applications of Linear Optimization:
• Transportation Planning: Finding the best shipping routes.
• Resource Allocation: Assigning resources efficiently.
• Manufacturing: Optimizing production schedules.
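A linear program such as the resource-allocation problems above can be sketched with SciPy's `linprog` (the objective and constraint numbers here are made up for illustration):

```python
from scipy.optimize import linprog

# Maximize profit 3x + 5y subject to x + y <= 4 and x + 3y <= 6,
# with x, y >= 0.  linprog minimizes, so we negate the objective.
c = [-3, -5]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at x = 3, y = 1 with profit 14
```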
• Nonlinear: Nonlinear optimization deals with problems where either the objective function or at
least one constraint is nonlinear.
 Applications of Nonlinear Optimization:
• Machine Learning: Optimizing neural networks.
• Finance: Portfolio optimization with risk-return trade-offs.
• Engineering: Designing aerodynamic shapes.
Optimization Techniques

• Choosing the right optimization technique depends on the nature of the problem.
• In real-world applications, a combination of these methods is often used.
 Gradient Descent:
• Gradient Descent is an iterative optimization algorithm used to minimize a function by updating
parameters in the direction of the negative gradient. It is widely used in machine learning for
training models by optimizing loss functions.
 Applications of Gradient Descent
• Training deep learning models.
• Optimizing cost functions in regression models.
• Neural network weight updates.
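The update rule can be sketched in a few lines of plain Python; the function, starting point, and learning rate below are illustrative choices, not prescriptions:

```python
# Minimize f(x) = (x + 2)^2, whose gradient is f'(x) = 2x + 4.
def grad(x):
    return 2 * x + 4

x = 10.0    # starting point
lr = 0.1    # learning rate (step size)
for _ in range(200):
    x -= lr * grad(x)   # step in the direction of the negative gradient

print(x)  # approaches -2, the minimizer
```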
 Convex Optimization
• Convex optimization deals with minimizing convex functions over convex sets. A function is
convex if a line segment between any two points on the function lies above or on the function
itself.
 Methods to Solve Convex Optimization Problems
• Gradient Descent: For differentiable functions.
• Interior-Point Methods: Efficiently solve convex constraints.
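The defining inequality can be spot-checked numerically; the sketch below verifies it for the convex function f(x) = x² at random points (a sanity check, not a proof):

```python
import random

# f is convex if f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b)
# for all a, b and all t in [0, 1].
def f(x):
    return x ** 2

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    t = random.random()
    assert f(t * a + (1 - t) * b) <= t * f(a) + (1 - t) * f(b) + 1e-9
```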
 Integer Programming (IP)
• Integer Programming (IP) is a special type of optimization where some or all decision
variables must take integer values.
 Types of Integer Programming
• Pure Integer Programming: All variables must be integers.
• Mixed Integer Programming (MIP): Some variables are integers, others are continuous.
• Binary Integer Programming: Variables can only take 0 or 1 values.
• Applications of Integer Programming:
• Logistics and supply chain (e.g., vehicle routing).
• Scheduling and workforce planning (e.g., employee shift assignments).
• Network design (e.g., optimizing network connections).
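For a toy instance, binary integer programming can even be solved by brute-force enumeration; the knapsack data below is illustrative, and real solvers (Gurobi, CPLEX) use branch-and-bound rather than enumeration:

```python
from itertools import product

# Binary IP as a tiny knapsack: pick items (x_i in {0, 1}) to maximize
# total value without exceeding the weight capacity.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

best_value, best_choice = 0, None
for choice in product([0, 1], repeat=len(values)):
    weight = sum(w * x for w, x in zip(weights, choice))
    value = sum(v * x for v, x in zip(values, choice))
    if weight <= capacity and value > best_value:
        best_value, best_choice = value, choice

print(best_choice, best_value)  # (0, 1, 1) with value 220
```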
Optimization in Machine Learning
• Machine Learning models heavily rely on optimization to improve accuracy, reduce overfitting, and
enhance performance.
 Below are key optimization areas:
1. Loss Function Minimization
• The goal of most machine learning models is to minimize loss, which measures how far predictions
are from actual values.
• Gradient descent is the standard tool for loss minimization, updating parameters along the
negative gradient of the loss.
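A minimal sketch of loss minimization, here mean squared error for a one-parameter linear model (the data and learning rate are illustrative):

```python
import numpy as np

# Fit y = w*x by minimizing MSE(w) = mean((w*x - y)^2).
# The gradient is dMSE/dw = 2 * mean(x * (w*x - y)).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x            # ground-truth slope is 2

w, lr = 0.0, 0.05
for _ in range(500):
    grad = 2 * np.mean(x * (w * x - y))
    w -= lr * grad

print(w)  # approaches 2.0
```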
2. Hyperparameter Tuning
• Hyperparameters control the behavior of machine learning models and can significantly impact
performance.
 Optimization Techniques:
• Grid Search – Tries all parameter combinations.
• Random Search – Randomly selects parameter combinations.
• Bayesian Optimization – Uses past results to predict the next best parameters.
• Example: grid search over a Random Forest's hyperparameters.
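The Random Forest example is not reproduced here; as a self-contained sketch of the same idea, the loop below scores every combination in a small grid against a hypothetical validation score (the parameter names and scoring function are stand-ins for real cross-validation):

```python
from itertools import product

# Grid search: evaluate every combination of hyperparameter values
# and keep the best-scoring one.
param_grid = {
    "n_estimators": [10, 50, 100],
    "max_depth": [2, 4, 8],
}

def validation_score(n_estimators, max_depth):
    # Hypothetical score, peaking at n_estimators=100, max_depth=4.
    return -abs(n_estimators - 100) - 10 * abs(max_depth - 4)

best_score, best_params = float("-inf"), None
for n, d in product(param_grid["n_estimators"], param_grid["max_depth"]):
    score = validation_score(n, d)
    if score > best_score:
        best_score, best_params = score, {"n_estimators": n, "max_depth": d}

print(best_params)  # {'n_estimators': 100, 'max_depth': 4}
```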
3. Feature Selection & Dimensionality Reduction
• Reducing the number of features improves model performance and prevents overfitting.
 Techniques:
• Principal Component Analysis (PCA) – Reduces dimensionality while retaining variance.
• Lasso Regression (L1 Regularization) – Selects important features by shrinking coefficients of
irrelevant ones to zero.
• Example: applying Lasso to data with irrelevant features drives their coefficients to zero.
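One way to see this effect is proximal gradient descent (ISTA) for the Lasso objective: a least-squares gradient step followed by soft-thresholding, which zeroes small coefficients. The data below is synthetic, with only the first of two features informative:

```python
import numpy as np

# Lasso via proximal gradient descent (ISTA).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=n)   # only feature 0 matters

alpha, lr = 0.5, 0.001
w = np.zeros(2)
for _ in range(5000):
    grad = X.T @ (X @ w - y) / n                          # least-squares gradient
    w = w - lr * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)  # soft-threshold

print(w)  # feature 1's coefficient is (near) zero
```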
Real-World Applications of Optimization
 Supply Chain Optimization:
• Supply chain optimization focuses on improving the efficiency of logistics, inventory management, and
transportation to minimize costs while maintaining high service levels.
• Example: Amazon optimizes its supply chain using machine learning and predictive analytics to ensure same-
day or next-day delivery by selecting the best fulfillment centers and shipping routes.
 Healthcare Decision-Making:
• Optimization in healthcare helps in improving patient care, reducing costs, and enhancing hospital resource
allocation.
• Example: Hospitals use AI-based scheduling systems to allocate ICU beds, reducing patient waiting times and
improving emergency response efficiency.
 AI Model Training & Tuning
• Optimizing AI models involves reducing training time, improving accuracy, and ensuring efficient computational
resource usage.
• Example: Google’s TensorFlow leverages distributed training and hardware acceleration (TPUs & GPUs) to
optimize deep learning model performance, enabling faster AI applications like Google Translate and speech
recognition.
Tools and Libraries for Optimization
 SciPy.optimize – The scipy.optimize module provides functions for solving mathematical optimization
problems, including linear programming, nonlinear optimization, and root-finding.
 TensorFlow & PyTorch – TensorFlow and PyTorch provide built-in optimizers like Adam, RMSprop, and SGD
to optimize deep learning models during training.
 Scikit-Optimize – Scikit-Optimize (skopt) is a library for Bayesian optimization, helping in automated
hyperparameter tuning of machine learning models.
 Gurobi & CPLEX – Gurobi and CPLEX are high-performance solvers for linear programming (LP), mixed-
integer programming (MIP), and quadratic programming (QP).
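As a taste of the first of these (assuming SciPy is installed), `scipy.optimize.minimize` finds the minimum of a smooth two-variable function from a starting guess; the function here is an illustrative quadratic:

```python
from scipy.optimize import minimize

# f(x, y) = (x - 1)^2 + (y - 2.5)^2, minimized at (1, 2.5).
def f(p):
    x, y = p
    return (x - 1) ** 2 + (y - 2.5) ** 2

res = minimize(f, x0=[0.0, 0.0])
print(res.x)  # close to [1.0, 2.5]
```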
Conclusion
• Optimization is essential for efficiency and performance.
• Choosing the right method depends on the problem type.
• Advanced techniques like Bayesian Optimization and Reinforcement Learning are pushing
optimization forward.
