Building Your First Neural Network with TensorFlow – Deep Learning 2


Are you interested in learning how to build neural networks, one of the most popular machine learning algorithms used in deep learning? In this article, we will walk you through the process of building a simple neural network using TensorFlow, one of the most widely used deep learning frameworks.

Neural networks are inspired by the structure and function of the human brain, and are capable of learning complex patterns and relationships in data. They have been successfully applied to a wide range of tasks, including image and speech recognition, natural language processing, and game playing.

In the previous blog post, we provided an overview of deep learning and how it is used in machine learning. We explained the role of TensorFlow in deep learning and its advantages over other frameworks. We covered the basics of neural networks and discussed the different types of neural networks used in deep learning. We also provided a brief tutorial on setting up your development environment for TensorFlow. Finally, we walked through building and training a simple neural network using TensorFlow and evaluated its performance.

https://thegeeksdiary.com/2023/03/23/introduction-to-deep-learning-with-tensorflow-deep-learning-1/

This article is a continuation of that post: here we will walk you through the process of building a more complex neural network for image classification using TensorFlow. By the end of this article, you will have a better understanding of how to define a neural network architecture, train and evaluate the model, and make predictions on new data.

In this article, we will start with the basics of neural networks and gradually work our way up to building a simple neural network using TensorFlow. We will cover the following topics:

  • The basics of neural networks
  • The role of TensorFlow in deep learning
  • Defining a neural network architecture
  • Training a neural network with TensorFlow
  • Evaluating the performance of a neural network
  • Making predictions with a trained neural network

By the end of this article, you will have built your first neural network using TensorFlow, and gained a deeper understanding of the inner workings of neural networks. If you want to deepen your knowledge of neural networks and other related topics, we also offer a comprehensive course on machine learning that covers all the essential topics in depth.

So, let’s get started and build your first neural network with TensorFlow!

The basics of neural networks

Neural networks are a type of machine learning algorithm that are inspired by the structure and function of the human brain. They are made up of layers of interconnected nodes, also known as neurons, that process and transmit information through a network of weighted connections.

The basic building block of a neural network is the perceptron, which is a mathematical model of a biological neuron. A perceptron takes input signals, multiplies them by weights, adds a bias term, and applies an activation function to produce an output signal.
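To make this concrete, here is a minimal sketch of a perceptron's forward pass written with NumPy; the input values, weights, bias, and step activation below are illustrative choices rather than anything from a trained model.

import numpy as np

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term
    z = np.dot(inputs, weights) + bias
    # Step activation: output 1 if the weighted sum is positive, otherwise 0
    return 1 if z > 0 else 0

# Illustrative values: two input signals with hand-picked weights and bias
inputs = np.array([0.5, 0.8])
weights = np.array([0.4, -0.2])
bias = 0.1
print(perceptron(inputs, weights, bias))  # prints 1, since 0.5*0.4 + 0.8*(-0.2) + 0.1 = 0.14 > 0

In practice the step function is usually replaced by a smooth activation such as ReLU or sigmoid, which we discuss later in this article.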

Neurons in a neural network are organized into layers, with each layer having a specific function. The input layer receives input data, while the output layer produces the final output of the network. The hidden layers in between the input and output layers perform transformations on the input data to produce more abstract representations that capture the underlying patterns in the data.

Neural networks can be trained using a process called backpropagation, which involves propagating the error between the predicted output and the actual output back through the network to update the weights and biases. This process is repeated multiple times until the network produces the desired output with high accuracy.
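To illustrate the idea, the sketch below performs a single gradient-descent update with TensorFlow's GradientTape on a toy one-variable regression problem; the data, initial weights, and learning rate are made up purely for illustration.

import tensorflow as tf

# Toy data: y = 2x, used only to demonstrate one training step
x = tf.constant([[1.0], [2.0], [3.0]])
y = tf.constant([[2.0], [4.0], [6.0]])

w = tf.Variable([[0.5]])
b = tf.Variable([0.0])
learning_rate = 0.1

# Forward pass, loss computation, gradient computation, and weight update
with tf.GradientTape() as tape:
    y_pred = tf.matmul(x, w) + b
    loss = tf.reduce_mean(tf.square(y - y_pred))

grad_w, grad_b = tape.gradient(loss, [w, b])
w.assign_sub(learning_rate * grad_w)
b.assign_sub(learning_rate * grad_b)
print(loss.numpy(), w.numpy(), b.numpy())

Keras's fit() method, which we use later in this article, runs this same loop for us over many batches and epochs.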

Neural networks have been successfully applied to a wide range of tasks, including image and speech recognition, natural language processing, and game playing. They are particularly well-suited for tasks where the input data has a complex structure or where there is a large amount of data available for training.

In the next section, we will discuss the role of TensorFlow in deep learning, and how it can be used to build and train neural networks.

The role of TensorFlow in deep learning

TensorFlow is an open-source deep learning framework developed by Google. It provides a powerful and flexible platform for building and training deep learning models, including neural networks.

TensorFlow allows you to define your neural network architecture using high-level APIs, such as Keras, or low-level APIs, such as TensorFlow’s core API. This gives you the flexibility to choose the level of abstraction that best suits your needs.
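As a quick, hedged illustration of the difference, the sketch below builds one dense computation twice: once through the Keras API and once with core TensorFlow ops; the layer sizes are arbitrary.

import tensorflow as tf

x = tf.random.normal((1, 8))

# High-level: a fully connected layer through the Keras API
keras_layer = tf.keras.layers.Dense(4, activation='relu')
print(keras_layer(x).shape)  # (1, 4)

# Low-level: the same computation written with core TensorFlow ops
w = tf.Variable(tf.random.normal((8, 4)))
b = tf.Variable(tf.zeros((4,)))
print(tf.nn.relu(tf.matmul(x, w) + b).shape)  # (1, 4)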

TensorFlow also provides a wide range of built-in functions and tools for common deep learning tasks, such as data preprocessing, model visualization, and hyperparameter tuning. This makes it easier to build and train complex neural networks, even if you are new to deep learning.

Another key feature of TensorFlow is its support for distributed computing. With TensorFlow, you can easily distribute your neural network across multiple machines to speed up training and inference, and scale to handle larger datasets.
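For example, tf.distribute.MirroredStrategy replicates a model across the GPUs available on a single machine and keeps their weights in sync. The sketch below only shows the pattern; the small model inside the scope is a placeholder, not the CIFAR-10 model we build later.

import tensorflow as tf

# Synchronous data-parallel training across all local GPUs (falls back to CPU if none are found)
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Build and compile the model inside the strategy scope so its variables are mirrored across devices
    distributed_model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(32,)),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    distributed_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# distributed_model.fit(...) is then called as usual; Keras splits each batch across the replicas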

TensorFlow also supports a wide range of hardware platforms, including CPUs, GPUs, and TPUs. This means you can take advantage of specialized hardware to accelerate the training and inference of your neural networks.
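As a quick check on your own machine, TensorFlow can report which devices it detects; a minimal sketch:

import tensorflow as tf

# List the hardware TensorFlow can see on this machine
print("CPUs:", tf.config.list_physical_devices('CPU'))
print("GPUs:", tf.config.list_physical_devices('GPU'))

If a GPU is installed and correctly configured, it will appear in the second list and TensorFlow will place compatible operations on it automatically.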

In summary, TensorFlow is a powerful and flexible deep learning framework that provides a wide range of tools and features for building and training neural networks. In the next section, we will demonstrate how to define and train a simple neural network using TensorFlow. For a detailed overview of TensorFlow and how it fits into deep learning, please refer to part 1 of this series.

Defining a neural network architecture

Defining a neural network architecture involves deciding on the number and types of layers, as well as the number of neurons in each layer. This is a crucial step in building a neural network, as the architecture determines the network’s ability to learn and generalize from the input data.

The simplest type of neural network is the single-layer perceptron, which has only one layer of output nodes that directly produce the final output of the network. However, most practical neural networks have multiple layers, including one or more hidden layers, to capture the underlying patterns in the input data.

There are several types of layers that can be used in a neural network, including:

  • Dense layer: A fully connected layer where each neuron is connected to every neuron in the previous and next layers.
  • Convolutional layer: Used for image recognition, where the layer applies a set of filters to the input image to extract important features.
  • Recurrent layer: Used for sequence data, where the layer processes each element in a sequence and maintains a hidden state that captures the sequence’s context.
  • Pooling layer: Used to downsample the output of a convolutional layer and reduce the dimensionality of the input.
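For reference, here is a short sketch showing how each of these layer types is created with TensorFlow's Keras API; the filter counts, unit counts, and pool sizes are arbitrary examples, not a recommended architecture.

import tensorflow as tf

dense = tf.keras.layers.Dense(64, activation='relu')           # fully connected layer with 64 neurons
conv = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')   # convolutional layer with 32 filters of size 3x3
recurrent = tf.keras.layers.LSTM(16)                           # recurrent layer for sequence data
pooling = tf.keras.layers.MaxPooling2D((2, 2))                 # downsamples feature maps by a factor of 2 in each dimension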

The activation function of each neuron is also an important consideration in neural network architecture. Activation functions introduce non-linearity into the network, allowing it to learn complex patterns and relationships in the input data. Popular activation functions include:

  • ReLU (Rectified Linear Unit): f(x) = max(0,x), which returns 0 for negative inputs and the input itself for positive inputs.
  • Sigmoid: f(x) = 1 / (1 + exp(-x)), which maps inputs to a probability value between 0 and 1.
  • Tanh (Hyperbolic tangent): f(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)), which maps inputs to a value between -1 and 1.
[Figures: ReLU in convolutional neural networks; the basic sigmoid function; hyperbolic tangent vs. sigmoid]
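To see these formulas in action, the short sketch below evaluates each activation on a few sample values using TensorFlow's built-in implementations; the input values are arbitrary.

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

print("ReLU:   ", tf.nn.relu(x).numpy())     # negative inputs become 0, positive inputs pass through
print("Sigmoid:", tf.nn.sigmoid(x).numpy())  # values squashed into the range (0, 1)
print("Tanh:   ", tf.nn.tanh(x).numpy())     # values squashed into the range (-1, 1)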

When defining a neural network architecture, it is important to balance the complexity of the network with the amount of available data and the task at hand. Overfitting, where the network becomes too complex and learns to memorize the training data instead of generalizing to new data, is a common problem in neural network architecture.
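One common way to counter overfitting is to add dropout layers, which randomly deactivate a fraction of neurons during training so the network cannot rely too heavily on any single connection. The block below is a hedged sketch of the idea and is not part of the model we build later in this article.

import tensorflow as tf

# A dense block with dropout: 30% of activations are randomly zeroed out during each training step
regularized_block = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation='softmax')
])

Monitoring the validation loss during training (as we do later with the validation_data argument of fit()) is another simple way to detect overfitting early.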

In the next section, we will demonstrate how to define a simple neural network architecture using TensorFlow’s Keras API.

Training a neural network with TensorFlow

In this section, we will define a neural network architecture using TensorFlow's Keras API and train it with the backpropagation algorithm.

As an example, let's consider an image classification task on the CIFAR-10 dataset, which contains 60,000 32×32 color images spread evenly across 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck). Our task is to train a neural network that classifies a new image into one of these 10 classes.

To train the neural network, we need to provide it with labeled training data, which consists of input images and their corresponding class labels. We also need to specify a loss function, which measures the difference between the predicted output of the network and the actual output.

In TensorFlow 2.x, we can use the fit() method of the Sequential class to train the neural network. Here’s an example code snippet:

import tensorflow as tf
import matplotlib.pyplot as plt

# Load the data
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar10.load_data()

# Preprocess the data
train_images = train_images / 255.0
test_images = test_images / 255.0
raw_train_labels = train_labels
raw_test_labels = test_labels
train_labels = tf.keras.utils.to_categorical(train_labels)
test_labels = tf.keras.utils.to_categorical(test_labels)

# Define the neural network architecture
model = tf.keras.Sequential([
  tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(32, 32, 3)),
  tf.keras.layers.MaxPooling2D((2,2)),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Train the model
model.fit(train_images, train_labels, epochs=20, validation_data=(test_images, test_labels))

label_names = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog', 6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'}

# Make predictions on the test data (once, outside the loop)
predictions = model.predict(test_images)
predicted_labels = tf.argmax(predictions, axis=1)

# Show example images from the test data with their true and predicted class labels
for image_index in range(12):
    plt.imshow(test_images[image_index])
    true_label_string = label_names[raw_test_labels[image_index][0]]
    predicted_label_string = label_names[predicted_labels[image_index].numpy()]
    plt.title(f"True label: [Name: {true_label_string}] - [Numeric: {raw_test_labels[image_index][0]}]", fontsize=10)
    plt.suptitle(f"Predicted label: [Name: {predicted_label_string}] - [Numeric: {predicted_labels[image_index].numpy()}]", fontsize=10, y=1)
    plt.show()
    # Print the predicted label for the example image
    print(f"Predicted label: [Name: {predicted_label_string}] - [Numeric: {predicted_labels[image_index].numpy()}]")

In this example, we first load and preprocess the training and testing data. We then define a neural network architecture that consists of a convolutional layer, a max pooling layer, a flatten layer, and two fully connected (dense) layers. The softmax activation function is used in the output layer, which produces a probability distribution over the 10 CIFAR-10 classes.

We then compile the model, specifying the optimizer (Adam), the loss function (categorical cross-entropy), and the evaluation metric (accuracy). Finally, we train the model using the fit() method, specifying the number of epochs and the validation data.

During training, the model updates its weights and biases using the backpropagation algorithm, which minimizes the loss function. After training, we can evaluate the performance of the model using the testing data, and make predictions on new, unseen images.

In the next section, we will discuss how to evaluate the performance of a trained neural network using various metrics.

Understanding the code

import tensorflow as tf
import matplotlib.pyplot as plt

# Load the data
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar10.load_data()

This code imports the TensorFlow and Matplotlib libraries, and then loads the CIFAR-10 dataset using the tf.keras.datasets.cifar10.load_data() function. The CIFAR-10 dataset consists of 60,000 32×32 color images in 10 classes, with 6,000 images per class. The dataset is divided into 50,000 training images and 10,000 testing images.
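As a quick sanity check, you can print the shapes of the arrays returned by load_data(); this assumes the variables defined in the snippet above.

# Inspect the shapes of the loaded arrays
print(train_images.shape)  # (50000, 32, 32, 3)
print(train_labels.shape)  # (50000, 1)
print(test_images.shape)   # (10000, 32, 32, 3)
print(test_labels.shape)   # (10000, 1)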

# Preprocess the data
train_images = train_images / 255.0
test_images = test_images / 255.0
raw_train_labels = train_labels
raw_test_labels = test_labels
train_labels = tf.keras.utils.to_categorical(train_labels)
test_labels = tf.keras.utils.to_categorical(test_labels)

This code preprocesses the training and testing data by normalizing the pixel values to the range [0, 1] and converting the label arrays to one-hot encoded vectors using the tf.keras.utils.to_categorical() function.
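To illustrate what one-hot encoding does, here is a tiny standalone example on a few made-up class indices rather than the CIFAR-10 arrays themselves.

import tensorflow as tf

# Class indices 0, 3, and 9 become rows of a 10-column one-hot matrix
example_labels = [0, 3, 9]
print(tf.keras.utils.to_categorical(example_labels, num_classes=10))
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]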

# Define the neural network architecture
model = tf.keras.Sequential([
  tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(32, 32, 3)),
  tf.keras.layers.MaxPooling2D((2,2)),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

This code defines a neural network architecture using the Keras API of TensorFlow 2.x. The architecture consists of a convolutional layer with 32 filters of size (3, 3), a max pooling layer with pool size (2, 2), a flatten layer, and two fully connected (dense) layers. The final dense layer uses the softmax activation function to produce a probability distribution over the 10 classes.

The model is then compiled using the compile() method of the Sequential class, specifying the optimizer (Adam), the loss function (categorical cross-entropy), and the evaluation metric (accuracy).

# Train the model
model.fit(train_images, train_labels, epochs=20, validation_data=(test_images, test_labels))

This code trains the neural network using the fit() method of the Sequential class, specifying the number of epochs (20) and the validation data.

label_names = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog', 6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'}

# Make predictions on the test data (once, outside the loop)
predictions = model.predict(test_images)
predicted_labels = tf.argmax(predictions, axis=1)

# Show example images from the test data with their true and predicted class labels
for image_index in range(12):
    plt.imshow(test_images[image_index])
    true_label_string = label_names[raw_test_labels[image_index][0]]
    predicted_label_string = label_names[predicted_labels[image_index].numpy()]
    plt.title(f"True label: [Name: {true_label_string}] - [Numeric: {raw_test_labels[image_index][0]}]", fontsize=10)
    plt.suptitle(f"Predicted label: [Name: {predicted_label_string}] - [Numeric: {predicted_labels[image_index].numpy()}]", fontsize=10, y=1)
    plt.show()
    # Print the predicted label for the example image
    print(f"Predicted label: [Name: {predicted_label_string}] - [Numeric: {predicted_labels[image_index].numpy()}]")

This code defines a dictionary label_names that maps the class index to the corresponding class name.

It then makes predictions on the test data using the trained model and converts the predicted probabilities to class labels with TensorFlow's argmax() function. Finally, it loops through the first 12 test images, displaying each image with its true label and predicted label (both looked up in the label_names dictionary), and prints the predicted label as a string.

Note that the plt.show() function is used to display each image and its associated labels.

Overall, this code loads the CIFAR-10 dataset, preprocesses the data, defines a convolutional neural network architecture, trains the model, and makes predictions on the test data. It also shows example images from the test data along with their associated true and predicted class labels.

Evaluating the performance of our neural network

After training a neural network, it’s important to evaluate its performance on unseen data. In this section, we will evaluate the performance of our trained neural network on the test data using several metrics for multi-class classification.

# Evaluate the model on the test data
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)

print('Test loss:', test_loss)
print('Test accuracy:', test_acc)

This code evaluates the model on the test data using the evaluate() method of the Sequential class. It calculates the test loss and test accuracy, and prints these metrics to the console.

from sklearn.metrics import confusion_matrix, classification_report

# Make predictions on the test data
predictions = model.predict(test_images)

# Convert one-hot encoded vectors to class labels
predicted_labels = tf.argmax(predictions, axis=1)
true_labels = tf.argmax(test_labels, axis=1)

# Print the classification report
print('Classification Report:')
print(classification_report(true_labels, predicted_labels))

# Print the confusion matrix
print('Confusion Matrix:')
print(confusion_matrix(true_labels, predicted_labels))

This code uses the predict() method of the Sequential class to make predictions on the test data, and then converts the predicted and true labels from one-hot encoded vectors to class labels using the argmax() function of TensorFlow.

Next, it calculates and prints the classification report and confusion matrix using the classification_report() and confusion_matrix() functions from the scikit-learn library. The classification report shows the precision, recall, and F1 score for each class, along with the overall accuracy and the macro and weighted averages. The confusion matrix shows, for each true class, how many images were assigned to each predicted class, which makes it easy to spot which classes the model confuses.

This code evaluates the performance of our trained neural network on the test data using several metrics for multi-class classification, and provides insights into the model’s strengths and weaknesses.

The entire code for this article is available on my github repo.

Making predictions with the trained neural network

The code below uses the same neural network that we trained on the CIFAR-10 dataset, but makes predictions on new images that were not part of the training or test data. The image_files dictionary contains the filename and true label for each of the 10 images we want to classify.

import numpy as np
from PIL import Image

# Display names and file paths for the new images
image_files = {
    0: ('airplane', '../data/airoplane.jpg'),
    1: ('automobile', '../data/automobile.jpg'),
    2: ('bird', '../data/bird.jpg'),
    3: ('cat', '../data/cat.jpg'),
    4: ('deer', '../data/deer.jpg'),
    5: ('dog', '../data/dog.jpg'),
    6: ('frog', '../data/frog.jpg'),
    7: ('horse', '../data/horse.jpg'),
    8: ('ship', '../data/ship.jpg'),
    9: ('truck', '../data/truck.jpg')
}

# Map class indices to class names
label_names = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog', 6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'}

for index in range(10):
    name, image_path = image_files[index]

    # Load the image and preprocess it the same way as the training data
    image = Image.open(image_path)
    image = image.resize((32, 32))
    image = np.array(image) / 255.0
    image_reshaped = image.reshape(1, 32, 32, 3)

    # Make a prediction on the new image
    prediction = model.predict(image_reshaped)

    # Convert the prediction to a class label
    predicted_label = np.argmax(prediction, axis=1)[0]
    predicted_label_name = label_names[predicted_label]

    # Show the image with its true and predicted labels
    plt.imshow(image)
    plt.title(f"[True Label: {name}] - [Predicted: {predicted_label_name}]", fontsize=10)
    plt.show()

The code first opens each image using the PIL library and resizes it to 32×32 pixels. Then, it preprocesses the image in the same way as the training and test data by normalizing the pixel values to be between 0 and 1 and reshaping the image to have a batch size of 1.

Next, it uses the predict() method of the trained model to make a prediction on the new image. The argmax() function of NumPy is then used to convert the prediction to a class label.

Finally, the code displays each image along with its true label and predicted label using the imshow() function of Matplotlib.

However, it’s important to note that the model might not perform well on these unseen images, since they are not part of the training or test data. The images in the CIFAR-10 dataset were carefully selected to be representative of the 10 different classes, but these new images might not be representative or might be different in some way from the training data.

In the next article of this series, we will discuss advanced techniques for model optimization that can improve the performance of the neural network on unseen data.

Conclusion

In this article, we learned the basics of neural networks, the role of TensorFlow in deep learning, and how to define a neural network architecture using TensorFlow. We also saw how to train a neural network on the CIFAR-10 dataset and evaluate its performance using various metrics. Finally, we discussed how to make predictions with a trained neural network on new images.

How to build an image classifier using Tensorflow

Step | Description
Load the data | Load the CIFAR-10 dataset using the tf.keras.datasets.cifar10.load_data() function
Preprocess the data | Normalize the pixel values of the images by dividing them by 255.0 and convert the labels to one-hot encoded vectors using the tf.keras.utils.to_categorical() function
Define the neural network architecture | Define a sequential model using the tf.keras.Sequential() function and add layers to the model using the tf.keras.layers module
Compile the model | Compile the model using the compile() method, specifying the loss function, optimizer, and evaluation metric
Train the model | Train the model on the training data using the fit() method, specifying the number of epochs and the validation data
Evaluate the model | Evaluate the performance of the model on the test data using the evaluate() method and compute metrics such as accuracy, precision, recall, and the F1 score
Make predictions on new images | Load new images, preprocess them, and use the predict() method of the trained model to make predictions

HowTo: Build an Image Classifier Using Tensorflow

Overall, we covered the following sections in this article:

Section | Description
Introduction | An overview of the article and the importance of building neural networks with TensorFlow
The basics of neural networks | An introduction to neural networks and their components
The role of TensorFlow in deep learning | A discussion of how TensorFlow is used in deep learning
Defining a neural network architecture | A detailed explanation of neural network architecture and how to define it using TensorFlow
Training a neural network with TensorFlow | An example of how to train a neural network on the CIFAR-10 dataset using TensorFlow
Evaluating the performance of a neural network | A discussion of how to evaluate the performance of a neural network using various metrics
Making predictions with a trained neural network | An example of how to make predictions with a trained neural network on new images

Summary of What We Learnt Today

In the next article of this series, we will discuss advanced techniques for optimizing neural networks and improving their performance on unseen data.
