Notes
1. Linear Regression
from sklearn.linear_model import LinearRegression
# create a LinearRegression object
regressor = LinearRegression()
# define the input (X) and output (y) variables
X = [[1], [2], [3], [4], [5]]
y = [2, 4, 5, 4, 5]
# train the model using the input and output data
regressor.fit(X, y)
# print the model coefficients
print("Intercept:", regressor.intercept_)
print("Coefficient:", regressor.coef_)
# make a prediction for a new input value
new_X = [[6]]
predicted_y = regressor.predict(new_X)
print("Predicted output:", predicted_y)
2. Logistic Regression
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# create a classification dataset
X, y = make_classification(n_samples=1000, n_features=4, random_state=42)
# split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# create a LogisticRegression object
classifier = LogisticRegression()
# train the model using the training data
classifier.fit(X_train, y_train)
# evaluate the model accuracy on the testing data
accuracy = classifier.score(X_test, y_test)
print("Accuracy:", accuracy)
# make a prediction for a new input value
new_X = [[0.5, 0.5, 0.5, 0.5]]
predicted_y = classifier.predict(new_X)
print("Predicted output:", predicted_y)
3. KNN
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# create a classification dataset
X, y = make_classification(n_samples=1000, n_features=4, random_state=42)
# split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# create a KNeighborsClassifier object with k=3
classifier = KNeighborsClassifier(n_neighbors=3)
# train the model using the training data
classifier.fit(X_train, y_train)
# evaluate the model accuracy on the testing data
accuracy = classifier.score(X_test, y_test)
print("Accuracy:", accuracy)
# make a prediction for a new input value
new_X = [[0.5, 0.5, 0.5, 0.5]]
predicted_y = classifier.predict(new_X)
print("Predicted output:", predicted_y)
4. SVM
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# create a classification dataset
X, y = make_classification(n_samples=1000, n_features=4, random_state=42)
# split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# create an SVM classifier with a linear kernel
classifier = SVC(kernel='linear')
# train the model using the training data
classifier.fit(X_train, y_train)
# evaluate the model accuracy on the testing data
accuracy = classifier.score(X_test, y_test)
print("Accuracy:", accuracy)
# make a prediction for a new input value
new_X = [[0.5, 0.5, 0.5, 0.5]]
predicted_y = classifier.predict(new_X)
print("Predicted output:", predicted_y)
5. K-Means
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
# create a random dataset with 500 samples and 4 clusters
X, y = make_blobs(n_samples=500, centers=4, random_state=42)
# create a KMeans object with 4 clusters
kmeans = KMeans(n_clusters=4, random_state=42)
# fit the KMeans model to the data
kmeans.fit(X)
# get the cluster labels and cluster centers
labels = kmeans.labels_
centers = kmeans.cluster_centers_
# plot the data points with different colors for each cluster
plt.scatter(X[:, 0], X[:, 1], c=labels)
# plot the cluster centers as red stars
plt.scatter(centers[:, 0], centers[:, 1], marker='*', c='red', s=200)
plt.show()
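Optional addition (sketch): when the number of clusters is not known in advance, the inertia_ attribute can be plotted for several values of k (the elbow method).
# elbow method: plot inertia for k = 1..8
inertias = []
k_values = range(1, 9)
for k in k_values:
    inertias.append(KMeans(n_clusters=k, random_state=42).fit(X).inertia_)
plt.plot(list(k_values), inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('inertia')
plt.show()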
6. Naive Bayes
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
# load the iris dataset
iris = load_iris()
# split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
# create a Gaussian Naive Bayes classifier
gnb = GaussianNB()
# train the classifier using the training data
gnb.fit(X_train, y_train)
# make predictions on the testing data
y_pred = gnb.predict(X_test)
# calculate the accuracy of the classifier
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
7. Binary Classification using Logistic Regression
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
# load the breast cancer dataset
data = load_breast_cancer()
# split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.2, random_state=42)
# create a logistic regression classifier (higher max_iter so the solver converges on the unscaled features)
lr = LogisticRegression(max_iter=10000)
# train the classifier using the training data
lr.fit(X_train, y_train)
# make predictions on the testing data
y_pred = lr.predict(X_test)
# calculate the accuracy, precision, recall, F1 score, and AUC-ROC of the classifier
accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc_roc = roc_auc_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Precision:", precision)
print("Recall:", recall)
print("F1 score:", f1)
print("AUC-ROC score:", auc_roc)
8. Image-Processing
import tensorflow
import keras
import os
import glob
from skimage import io
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
dataset_path = '/content/Animals'
class_names = ['Cheetah', 'Jaguar', 'Leopard', 'Lion','Tiger']
# use glob to retrieve the image file paths for one class
animal_path = os.path.join(dataset_path, class_names[1], '*')
animal_path = glob.glob(animal_path)
image = io.imread(animal_path[4])
# plotting the original image
i, (im1) = plt.subplots(1)
i.set_figwidth(15)
im1.imshow(image)
i, (im1, im2, im3, im4) = plt.subplots(1, 4, sharey=True)
i.set_figwidth(20)
im1.imshow(image) #Original image
im2.imshow(image[:, : , 0]) #Red
im3.imshow(image[:, : , 1]) #Green
im4.imshow(image[:, : , 2]) #Blue
i.suptitle('Original & RGB image channels')
#GREYSCALE CONVERSION
from skimage import color
gray_image = color.rgb2gray(image)
plt.imshow(gray_image, cmap = 'gray')
#normalisation
norm_image = (gray_image - np.min(gray_image)) / (np.max(gray_image) - np.min(gray_image))
plt.imshow(norm_image, cmap='gray')
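Optional addition (sketch; the 224x224 target size is an arbitrary choice, not from the original notes): downstream models usually expect a fixed input size, which skimage.transform.resize can provide.
from skimage.transform import resize
# resize the original image to a fixed shape (pixel values are rescaled to [0, 1])
resized_image = resize(image, (224, 224))
plt.imshow(resized_image)
plt.show()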
# DATA AUGMENTATION
# 1. Shifting Horizontally
from numpy import expand_dims
from keras.preprocessing.image import img_to_array, ImageDataGenerator
# convert the image to a numpy array
data = img_to_array(image)
# expand dimensions to a single-sample batch
samples = expand_dims(data, 0)
# create image data augmentation generator
datagen = ImageDataGenerator(width_shift_range=[-200,200])
# create an iterator
it = datagen.flow(samples, batch_size=1)
fig, im = plt.subplots(nrows=1, ncols=3, figsize=(15,15))
# generate batch of images
for i in range(3):
    # convert to unsigned integers
    image = next(it)[0].astype('uint8')
    # plot image
    im[i].imshow(image)
# 2. Flipping
datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True)
# create an iterator
it = datagen.flow(samples, batch_size=1)
fig, im = plt.subplots(nrows=1, ncols=3, figsize=(15,15))
# generate batch of images
for i in range(3):
    # convert to unsigned integers
    image = next(it)[0].astype('uint8')
    # plot image
    im[i].imshow(image)
# 3. Rotation
datagen = ImageDataGenerator(rotation_range=20, fill_mode='nearest')
# create an iterator
it = datagen.flow(samples, batch_size=1)
fig, im = plt.subplots(nrows=1, ncols=3, figsize=(15,15))
# generate batch of images
for i in range(3):
    # convert to unsigned integers
    image = next(it)[0].astype('uint8')
    # plot image
    im[i].imshow(image)
# 4. Changing the Brightness
datagen = ImageDataGenerator(brightness_range=[0.5,2.0])
it = datagen.flow(samples, batch_size=1)
fig, im = plt.subplots(nrows=1, ncols=3, figsize=(15,15))
# generate batch of images
for i in range(3):
    # convert to unsigned integers
    image = next(it)[0].astype('uint8')
    # plot image
    im[i].imshow(image)
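Optional addition (sketch; parameter values are illustrative): several augmentations can be combined in a single ImageDataGenerator.
# combine shifting, rotation, flipping and brightness changes in one generator
datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True,
                             brightness_range=[0.5, 2.0],
                             fill_mode='nearest')
it = datagen.flow(samples, batch_size=1)
fig, im = plt.subplots(nrows=1, ncols=3, figsize=(15, 15))
for i in range(3):
    # convert to unsigned integers
    image = next(it)[0].astype('uint8')
    # plot image
    im[i].imshow(image)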