Machine Learning
-
Neural Network: Squares
In this post we’ll take a look at a simpler example of a neural network in Python than the iris example we saw previously. First we’ll generate some sine data. The task of the ANN will be to try to output the square of the input values. First we’ll add the necessary imports, then we’ll…
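The excerpt cuts off here, but as a hedged sketch of the task it describes (the network architecture, activation, and training data below are assumptions for illustration, not necessarily what the post uses), Scikit-learn's MLPRegressor can be trained to approximate y = x²:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented training data: x values in [-1, 1], targets y = x^2.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = (X ** 2).ravel()

# A small two-hidden-layer network; these sizes are a guess, not the post's.
model = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(X, y)
preds = model.predict(X)
mae = float(np.abs(preds - y).mean())
print(round(mae, 3))    # mean absolute error on the training inputs
```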
-
Classifying Irises with a Neural Network
Artificial Neural Networks (ANNs) are loosely inspired by the structure of biological neurons, though the resemblance is not close. Each neuron multiplies its inputs by a set of “weights”; the weighted inputs are then summed and passed through a non-linear function to produce the output. Neurons are then connected together, with the output of one neuron…
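A minimal sketch of that computation for a single neuron (the sigmoid activation and the example numbers are invented for illustration):

```python
import numpy as np

# One neuron: multiply inputs by weights, sum (plus a bias), and pass the
# total through a non-linear function -- here a sigmoid.
def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) + bias   # weighted sum of the inputs
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid non-linearity

out = neuron(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.2]), 0.0)
print(round(float(out), 3))
```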
-
Naive Bayes
Naive Bayes is a technique that has found wide application, notably in spam filters. While Bayes’ Theorem is a theorem in mathematics, there is no “Naive Bayes’ Theorem”. Rather, the “naive” refers to the naive assumption that, given the class, the probability of one value occurring in your data is independent of the probabilities of the other values.…
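A toy sketch of that independence assumption in a spam-filter setting (the word likelihoods and priors below are invented numbers, not from any real corpus):

```python
# Invented per-word likelihoods and class priors -- purely illustrative.
p_word_given_spam = {"win": 0.6, "money": 0.5, "hello": 0.1}
p_word_given_ham = {"win": 0.05, "money": 0.1, "hello": 0.4}
p_spam, p_ham = 0.3, 0.7

message = ["win", "money"]
score_spam, score_ham = p_spam, p_ham
for word in message:
    # The "naive" step: multiply per-word likelihoods as if each word's
    # presence were independent of the others, given the class.
    score_spam *= p_word_given_spam[word]
    score_ham *= p_word_given_ham[word]

# Normalise the two scores into a posterior probability (Bayes' theorem).
posterior_spam = score_spam / (score_spam + score_ham)
print(round(posterior_spam, 3))
```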
-
Decision Tree Classifiers
Decision tree classifiers work by trying to divide up your data samples based on data series values, at every stage attempting to reduce the degree to which the resulting subsets are “mixed”, as judged by Gini impurity or Shannon entropy. For example, if you have a collection of measurements on plants, a decision tree classifier might first…
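A quick sketch of the Gini impurity calculation a tree uses to judge how “mixed” a subset is (the labels are invented for illustration):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 0.0 for a pure subset, higher for more mixed ones."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini(["setosa"] * 10))                     # pure subset: 0.0
print(gini(["setosa"] * 5 + ["virginica"] * 5))  # maximally mixed: 0.5
```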
-
Using PCA and Logistic Regression to Predict Breast Cancer Diagnosis
Let’s take a look at applying PCA to a dataset that has many more than just a few data series. The Wisconsin breast cancer dataset from Scikit-learn contains thirty columns of data. The data apparently concerns the digitally-detected shape of the nuclei of cells in breast tumour biopsies. We are also told whether the diagnosis…
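A hedged sketch of such a pipeline (the choice of five components and the train/test split are assumptions, not necessarily what the post does):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Thirty columns of nucleus measurements, binary diagnosis labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale, squeeze the 30 columns down to 5 components, then classify.
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(round(acc, 3))    # held-out accuracy
```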
-
PCA for the Iris Flower Dataset
In this post we’ll analyse the Iris Flower Dataset using principal component analysis and agglomerative clustering. We’ll use PCA both to reduce the number of data series we’re feeding to our agglomerative clustering model (potentially making clustering more efficient, although in this case we’ve only got a total of four data series so it won’t…
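A minimal sketch of that pipeline, reducing the four iris data series to two principal components before clustering (the exact steps in the post may differ):

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_iris().data)
# Reduce the four data series to two principal components...
X_2d = PCA(n_components=2).fit_transform(X)
# ...then run agglomerative clustering in the reduced space.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X_2d)
print(len(set(labels)))
```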
-
Principal Component Analysis
In this course we’ve created graphs of the well-known iris flower dataset repeatedly, but we were always faced with a frustrating choice. Even though we’ve often used all four data series in the dataset to fit models, we could only plot two data series on a plot, because plots are 2D. By using 3D plots…
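As a sketch of how PCA squeezes the four iris data series down to two plottable ones (the component count here is just for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                 # four data series per sample
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)          # two series: plottable on a 2D scatter

print(X_2d.shape)
# Fraction of the original variance the two components retain.
print(round(float(pca.explained_variance_ratio_.sum()), 3))
```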
-
Adding to Clusters: Nearest Neighbours
Some clustering techniques allow you to fit models to data, and you can then feed whatever data you like to the model and it will try to classify your samples. For instance, the Scikit-learn k-means model has a fit method that lets you fit the model to some data. The fit method calculates the necessary…
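A minimal sketch of that fit-then-predict workflow with Scikit-learn's KMeans (the new sample values are invented):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data
# fit() computes the cluster centroids from the training data.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# predict() then assigns new, unseen samples to the nearest centroid.
new_samples = [[5.0, 3.4, 1.5, 0.2], [6.7, 3.0, 5.2, 2.3]]
new_labels = km.predict(new_samples)
print(new_labels)
```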
-
Scikit-Learn Agglomerative Clustering
In the last post we saw how to create a dendrogram to analyse the clusters naturally present in our data. Now we’ll actually cluster the iris flower dataset using agglomerative clustering. Note that, although it doesn’t make a huge difference with the iris flower dataset, we will usually need to normalise the data (e.g. with…
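A hedged sketch of that workflow, normalising with Scikit-learn's StandardScaler before clustering (details may differ from the post's):

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

# Normalise each data series to zero mean and unit variance first, so no
# single series dominates the distance calculations.
X = StandardScaler().fit_transform(load_iris().data)
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
print(sorted(set(labels)))
```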
-
Hierarchical Clustering: Dendrograms
Hierarchical clustering involves either agglomerative clustering (where we start with every sample in its own cluster and then gradually merge them together) or else divisive clustering (the samples start in a single cluster which we gradually split up). Here we’ll examine agglomerative clustering. There are a number of possible advantages of this approach. For one…
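As a sketch, SciPy's linkage function performs the agglomerative merging that a dendrogram visualises (the choice of Ward linkage here is an assumption):

```python
from scipy.cluster.hierarchy import linkage
from sklearn.datasets import load_iris

X = load_iris().data
# Ward linkage: start with each sample in its own cluster and repeatedly
# merge the pair of clusters whose union increases variance the least.
Z = linkage(X, method="ward")
# Each row of Z records one merge; n samples give n - 1 merges.
# scipy.cluster.hierarchy.dendrogram(Z) would draw the merge tree.
print(Z.shape)
```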