Ceiling Fan Update

When I originally worked with my interior designer on the master bedroom, one item that I neglected to finish was upgrading the old ceiling fan to something that better fit the design style. I finally got around to that recently and installed a new wifi-enabled smart ceiling fan that integrates nicely with Amazon Alexa and looks excellent. Wiring the fan was an adventure, since I first had to wire in the wifi transceiver.

Once the wiring was done, I installed the rest of the fan.

Once the fan was connected to Alexa, along with the table lamps on either side of the bed, the look of the bedroom was transformed!

Ring Security Camera

When all of the Lake Wisconsin flooding happened last year, I installed a security camera system at the lake house that would allow me to remotely keep tabs on the dock, boat lift, and boat. I went with an inexpensive wired security system, which worked well but broke over the winter when a tree branch hit the wiring. This spring, I wanted to find a simpler solution. After investigating a lot of options, I decided to go with a WiFi-based camera from Ring. The first step was to install a WiFi range extender in the house to beam WiFi down to the dock. I used a Linksys range extender that had enough power to get the job done:

From there, hooking up the Ring camera was easy, and the mobile app UI is slick!

Nest

For a long time, I’ve wanted to replace the thermostat that controls HVAC for the top two floors of the house. My wife hated the old one, which was clunky and offered no way to control it from a phone. I took advantage of a Black Friday deal for a Nest Learning Thermostat at Home Depot. It was a pretty easy install process, which first involved removing the old thermostat and labeling the wires.

From there, all I had to do was wire up the Nest and calibrate and configure it.

After that, connecting it to my phone via the Nest App and subsequently Alexa was very straightforward.

Boat Lift Fiasco

Several weeks ago when I was in Dallas, I received word from my realtor that the boat lift (and the boat) at the lake house was in danger of being swept away due to flooding on Lake Wisconsin and the Wisconsin River.

After following up for more information, I was eventually able to get ahold of Manke Enterprises, one of the two Lake Wisconsin dock companies. They agreed to go take a look that night, and what they found wasn’t great.

Manke did their best to secure the lift for the night, and we went to bed wondering if everything would still be there in the morning.

Luckily, everything held fast overnight despite record-high water levels for late June on Lake Wisconsin. Manke was able to get the boat and the lift out, and several weeks later (plus about $5,000 out of pocket) we were able to get the lift and the boat back in the water. Before that happened, though, we also had to have the dock rescued a week after the boat lift, as the water levels rose even higher.

Thankfully, Deano Docks was able to get it reset in a couple of days and the boat lift was able to go back in.

Learning from this experience, I took several steps to prevent this from happening again. First, I learned how to read the NOAA water level charts, to predict when flood events might happen.

Second, I installed security cameras that I can access remotely, to be able to personally verify that everything is where it should be.

The cameras were pretty easy to install, and it took half a day to run cables along the stairs that go down to the dock and bury them underground from the top of the stairs to the house. By using a wired system, I don’t have to worry about WiFi, batteries, or any other potential points of failure.

Hopefully the rest of the summer will be as drama free as possible!

SmartThings Smoke Alarm

One of the many home automation technologies that I use in my houses is Samsung SmartThings. It supports a wide array of sensors, integrates well with Alexa and Arlo, and provides a slick phone UI with the ability to manage multiple homes.

For a long time, I’ve used the SmartThings Leak Sensor to monitor for water leaks, which is my biggest concern during the winter, especially at the lake house. The leak sensor also monitors temperature over time, which is a big bonus given the concern over freezing pipes.

My second biggest concern is fire, especially when the heat is running at the lake house while I’m away for extended periods. While the temperature sensors would theoretically pick up the heat from a fire, that’s obviously no substitute for a real smoke and carbon monoxide detector. Therefore, I wanted to install a smoke and CO detector that I could monitor remotely through SmartThings. I settled on one from First Alert, which is recommended by Samsung, battery powered, and pretty inexpensive.

It was fairly easy to install, although getting it to pair with the SmartThings hub wasn’t as straightforward as it might seem. The device leaves the factory in a state that is not ready to pair, and you first have to clear some ‘state’ that is left on it. Below is the procedure that finally worked for me:

(1) Exclude the Smoke Detector. To do this in the SmartThings mobile app:

  • Tap the More menu.
  • Tap the Settings (gear) icon.
  • Scroll down to the “Hubs” section and tap the Hub.
  • Tap Z-Wave Utilities.
  • Tap General Device Exclusion.
  • When prompted, do the following on the smoke detector:
  • Slide out the smoke detector’s battery tray.
  • Remove and re-insert the batteries (checking the correct orientation).
  • Press and hold the detector’s test button while re-inserting the battery tray.
  • Wait for the smoke detector to beep (about 2 seconds).
  • Release the button.

(2) Now you can add the smoke detector to your SmartThings hub. Start in the SmartThings mobile app:

  • Tap My Home.
  • Under Things, tap Add a Thing at the bottom of your Things list.
  • The app will say Looking for Devices.
  • While the Hub searches, press and hold the detector’s test button as you slide the battery tray back into the device.
  • Wait about 2 seconds for the smoke detector to beep, and then release the button. The smoke detector will then beep again.
  • When the device is discovered, it will be listed at the top of the screen.
  • Tap the device to rename it and tap Done.
  • When finished, tap Save.
  • Tap Ok to confirm.

After getting the device set up, I was able to test the integration with the hub by pressing the test button on the smoke detector. The event registers under the device at My Home > “Smoke Alarm” > Recently, where you will see the event “Was Tested” followed by “Was Cleared”. However, if you’ve set up any automation under the Smart Home Monitor (e.g., send a text when the smoke alarm goes off), it will NOT trigger when you press the test button. This would be a nice feature for the manufacturer to add.

Now that the smoke alarm is set up, I can effectively monitor both of my houses for water leaks, freezing, fire, and carbon monoxide through SmartThings!

Choosing the Right Smart Switch

One of the primary goals of my initial home automation installation was to install smart light switches that would allow me to turn lights on and off with my voice. While it is cool to be able to walk into a room and say “Alexa, turn on the living room,” the real power comes when you can set up scenes that work with multiple lights at once. I can say “Alexa, turn on the downstairs” to turn on the set of lights I normally like to have on for the middle level of our house. I can also say “Alexa, turn on the whole downstairs” to turn on every light downstairs. Finally, I can say “Alexa, turn off the whole house” as I walk out the door, and Alexa will turn off any lights in the house that happen to be on:

[Screenshot: Alexa app lighting groups]

Virtually all mainstream wifi smart switches work with the Echo platform, so compatibility was not an issue. With that in mind, my first inclination was to use the WeMo platform, since that was one of the most highly marketed platforms that I had seen in stores:

[Image: WeMo product lineup]

WeMo makes a small number of products; the two mainline products I chose to work with are the WeMo Switch ($34.99 per switch) and the WeMo Light Switch ($49.99 per switch). The Switch is a smart outlet that plugs into an existing outlet, and the Light Switch is an actual light switch that installs in the wall in place of a traditional switch. The Switch works as advertised. It is a little clunky to pair, since it communicates directly with the WiFi router but has to be connected to the router initially via the WeMo app. Once that is done, however, it gets the job done and lets you turn on and off lights that are powered by an outlet. The setup through the WeMo app and the Alexa app is pretty straightforward:

[Screenshot: WeMo app setup]

[Screenshot: Alexa app device setup]

The problems came when working with the WeMo Light Switches. Like the Switch, they are somewhat clunky to pair but do function. However, they have a critical Achilles’ heel. If you read the specifications, you’ll find this:

Replaces single pole switch. Not compatible with 3 way (multi location control) switches.

This means that if you have a light with multiple switches that turn it on and off (which is extremely common), the WeMo Light Switch will not work for you. This sent me back to the drawing board. I landed on the Caséta Wireless platform from Lutron ($54.99 per switch). These switches support three-way configurations, and come with their own base station that connects to the WiFi router via an ethernet cable. The base station is a small risk, since the switches become useless if it ever stops working, but I mitigated that by buying a backup base station in case Lutron stops making the platform. The benefit of the base station is that pairing the switches to WiFi is much smoother. The switches are also higher quality, and come with dimmer capabilities:

[Image: Lutron Caséta dimmer switch]

The Lutron app is fine, and they integrate with the Alexa app just like WeMo:

[Screenshot: Lutron switches in the Alexa app]

Most importantly, they support wiring for three-way functionality:

[Image: Caséta three-way wiring diagram]

I would highly recommend the Caséta platform. I have installed 16 switches throughout my house to complement the six Echoes, giving me voice-activated smart lighting in every room.

Custom Alexa Skill for Tracking Car Use

Over the last several weeks, I have been adding various home automation technologies to the house: Arlo for home security, WeMo and Lutron Caséta for automated lighting, and Amazon Echo/Alexa for voice control. Out of the box, Alexa’s integration with other smart home technologies is pretty good. It doesn’t take any custom work to turn lights on and off with your voice, and integrating Alexa with Arlo was fairly straightforward using the IFTTT service, which allows for basic “if this, then that” style applets that can be triggered via voice through Alexa.

However, in order to build a true smart home, I wanted to be able to write my own applications, executed within my IoT ecosystem, that would serve needs very specific to me. A few of my initial ideas were:

  • Wine Cellar Integration: I want to be able to ask Alexa if we have a particular bottle in stock, and if so how many bottles we have. This would require integrating an Alexa skill with Vinocell, a wine cellar management application that I use.
  • Madison Restaurant Ideas: My wife and I are frequently indecisive about where to eat dinner. I want to be able to ask Alexa for ideas, tailored to our specific preferences and location, beyond what an app like Urbanspoon could provide.
  • Car Tracking: As a sports car collector, I have many cars. I often find myself wondering, when was the last time I actually drove the Porsche? How often in the last month or two have I driven the Porsche?

This post will focus on the last idea. It struck me as a fairly good first Alexa project, since it wouldn’t involve integrations with any third party APIs, just APIs that I’d have to develop to store the requisite data.

Requirements

I typically interact with Alexa every morning on my way out the door. I ask “what’s new?” to get my daily news briefing, ask what is on my calendar, ask about the weather, and ask about my commute. The goal for the skill is to be able to say, “Alexa, tell Hardin Home that I’m driving the Mercedes today.” Alexa will record a timestamp that I drove the Mercedes that day in a database, and retrieve that information when I say “Alexa, ask Hardin Home when I last drove the Mercedes” in the form of a sentence like, “You last drove the Mercedes six days ago.”

Architecture

This project would involve several components: an Alexa skill, which would call a function on AWS Lambda (written in node.js), which in turn would call a series of very simple PHP APIs hosted on an Ubuntu/Apache EC2 instance, with a MySQL database storing the data about the cars. The EC2 instance would be placed inside a VPC, with an AWS security group limiting access on port 80 solely to the VPC. This allows me to grant the Lambda function access to the VPC, so that it (and only it) can interact with my API, sparing me a lot of the additional security measures I’d need if the EC2 instance were open to the outside world.

[Diagram: skill architecture, from Alexa to Lambda to the EC2-hosted API]

API Implementation

To implement the API, I created a t2.small EC2 instance and assigned it an elastic IP. I set up a security group that opened all ports within the VPC, and then granted access to my home IP on ports 80 and 22, allowing me to connect to the server and deploy code, as well as test web services from a browser.
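
As a rough sketch, the same rules could also be added from the AWS CLI (the security group ID and home IP below are placeholders, and the CIDR assumes the default VPC range implied by my instance’s 172.31.x.x address):

# Open port 80 to the VPC only (placeholder group ID; CIDR assumes the default VPC):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 172.31.0.0/16

# Allow SSH from my home IP (placeholder address); a similar rule covers port 80:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32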

[Screenshot: security group configuration]

Once this was done, I SSH’ed into my server and installed a basic LAMP stack:

sudo apt-get update
sudo apt-get install lamp-server^

After this, I installed phpMyAdmin and created a database called hardin_home. I added two simple tables: cars and cars_driven. The cars table holds information about each car (which will be used later in a sample conversational query with Alexa), and cars_driven holds a list of timestamps for when each car was driven:

[Screenshot: cars and cars_driven table structures in phpMyAdmin]
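
For reference, here is a minimal sketch of what the two tables might look like as DDL (the table and column names are taken from the PHP services below; the exact types and defaults are my assumptions):

-- A sketch of the schema (names from the code; types and defaults assumed):
CREATE TABLE cars (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(64) NOT NULL,       -- e.g., 'Porsche'
    description TEXT                 -- the blurb read back by Alexa
);

CREATE TABLE cars_driven (
    id INT AUTO_INCREMENT PRIMARY KEY,
    car VARCHAR(64) NOT NULL,        -- matches cars.name
    driven_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP  -- set automatically on insert
);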

I implemented three quick-and-dirty PHP services that can be called by the Lambda implementation. An obvious refactor would be to build the API on a proper framework or microframework, but in this case I wanted to be able to crank out the API calls in five minutes, so they are just plain PHP. They are:

drive.php

<?php

// Record a drive event for the given car; the driven_timestamp column
// is filled in by the table's default on insert:
$con = mysqli_connect("localhost", "XXXXX", "XXXXX", "hardin_home");
$result = $con->query("insert into cars_driven (car) values ('" . $con->real_escape_string($_REQUEST['car']) . "')");

mysqli_close($con);

last_driven.php

<?php

// Fetch the most recent drive event for the requested car:
$con = mysqli_connect("localhost", "XXXXX", "XXXXX", "hardin_home");

$result = $con->query("select * from cars_driven where car = '" . $con->real_escape_string($_REQUEST['car']) . "' order by id desc limit 1");
$row = $result->fetch_assoc();

$timestamp = $row['driven_timestamp'];

// If the car was driven today, say so; otherwise report the elapsed time
// in words (via the humanTiming() helper) along with the date:
if (date('Ymd') == date('Ymd', strtotime($timestamp)))
{
    echo "You last drove the " . $_REQUEST['car'] . " today.";
}
else
{
    echo "You last drove the " . $_REQUEST['car'] . " " . humanTiming(strtotime($timestamp)) . " ago, on " . date('F j, Y', strtotime($timestamp)) . ".";
}

mysqli_close($con);
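
Note that last_driven.php calls a humanTiming() helper that isn’t shown in the snippet above. Here is a minimal sketch of what it might look like (the helper name comes from the code; the unit boundaries and wording are my assumptions):

// Render a past Unix timestamp as a phrase like "6 days" (a sketch; the
// original helper isn't shown, so the details here are assumptions):
function humanTiming($time)
{
    $elapsed = time() - $time;

    $units = array(
        31536000 => 'year',
        2592000  => 'month',
        604800   => 'week',
        86400    => 'day',
        3600     => 'hour',
        60       => 'minute',
        1        => 'second'
    );

    foreach ($units as $seconds => $name)
    {
        if ($elapsed < $seconds) continue;
        $count = floor($elapsed / $seconds);
        return $count . ' ' . $name . ($count > 1 ? 's' : '');
    }

    return 'moments';
}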

more_info.php

<?php

// Look up the descriptive blurb for the requested car:
$con = mysqli_connect("localhost", "XXXXX", "XXXXX", "hardin_home");

$result = $con->query("select * from cars where name = '" . $con->real_escape_string($_REQUEST['car']) . "'");
$row = $result->fetch_assoc();

echo $row['description'];

mysqli_close($con);

Lambda Implementation

According to Amazon, “AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.” Currently, Lambda supports node.js, Python, and Java. For this implementation, I selected node.js. First, I needed to configure a Lambda application to use node.js, and assign it a role with access to the VPC that I set up earlier:

[Screenshot: Lambda function configuration]

Under advanced settings, I gave it explicit access to my VPC (and thus my PHP services on my EC2 instance):

[Screenshot: Lambda advanced settings with VPC access]

Prior to writing and deploying my node.js application package to Lambda, I needed to set up how the Lambda function would be triggered. For this implementation, the trigger would obviously be an Alexa call:

[Screenshot: Alexa Skills Kit trigger configuration]

Typically, applications are deployed to Lambda by uploading a ZIP file of the Lambda project. My project has a very simple file structure:

  • AlexaSkill.js: A base class provided by Amazon that I can inherit from
  • index.js: My application
  • node_modules: Any third party node.js modules
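
As a sketch, packaging the project and pushing it to Lambda can be done in two commands (assuming the AWS CLI is installed and configured; the ZIP file name is arbitrary, and the function name matches the ARN shown later):

# Package the project and upload it to the existing Lambda function:
zip -r HardinHome.zip index.js AlexaSkill.js node_modules
aws lambda update-function-code --function-name HardinHome --zip-file fileb://HardinHome.zip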

For this project, I didn’t have any third party node.js modules, so node_modules is empty. In my index.js file, I started with the following:

'use strict';

// Link to our Alexa Skill (see next section):
var APP_ID = "amzn1.ask.skill.8b0b2dac-5031-4257-961d-3daccb68642f";

// The AlexaSkill prototype and helper functions:
var AlexaSkill = require('./AlexaSkill');

// Include the HTTP lib so we can call our PHP API:
var http = require('http');

// Our implementation:
var HardinHome = function ()
{
    AlexaSkill.call(this, APP_ID);
};

// Extend AlexaSkill:
HardinHome.prototype = Object.create(AlexaSkill.prototype);
HardinHome.prototype.constructor = HardinHome;

HardinHome.prototype.eventHandlers.onSessionStarted = function (sessionStartedRequest, session)
{
    // Any session init logic would go here...
};

HardinHome.prototype.eventHandlers.onLaunch = function (launchRequest, session, response)
{
    getWelcomeResponse(response);
};

HardinHome.prototype.eventHandlers.onSessionEnded = function (sessionEndedRequest, session)
{
    // Any session cleanup logic would go here...
};

Now that our base implementation is set up, we need to define our intent handlers. These are hooks that receive calls from the Alexa SDK when Alexa matches a particular speech pattern; the patterns themselves are defined below in our Alexa SDK implementation:

HardinHome.prototype.intentHandlers =
{
    "CarsDriven": function (intent, session, response)
    {
        getCarsDriven(intent, session, response);
    },
 
    "CarsDrive": function (intent, session, response)
    {
        getCarsDrive(intent, session, response);
    },
 
    "CarsMoreDetail": function (intent, session, response)
    {
        getCarsMoreDetail(intent, session, response);
    },

    "CarsNoMoreDetail": function (intent, session, response)
    {
        response.tell("");
    },

    "AMAZON.HelpIntent": function (intent, session, response)
    {
        helpTheUser(intent, session, response);
    },

    "AMAZON.StopIntent": function (intent, session, response)
    {
        var speechOutput = "Goodbye";
        response.tell(speechOutput);
    },

    "AMAZON.CancelIntent": function (intent, session, response)
    {
        var speechOutput = "Goodbye";
        response.tell(speechOutput);
    }
};

From there, I needed to actually define the three key functions called in the block above: getCarsDriven, getCarsDrive, and getCarsMoreDetail. The first asks Alexa when I last drove a car, the second tells Alexa that I drove a car, and the third asks Alexa for more information about a car. That last call was something I implemented purely to experiment with Alexa’s conversational abilities, where she could ask me if I wanted more information about a car and provide it if I responded yes.

getCarsDriven

function getCarsDriven(intent, session, response)
{
    var speechText = "",
    repromptText = "",
    speechOutput,
    repromptOutput;
 
    var car = intent.slots.Car.value;
    session.attributes['car'] = car;
 
    var request_car = "";
 
    if (car.toLowerCase() == "mercedes")
    {
        request_car = "Mercedes";
    }
    else if (car.toLowerCase() == "porsche")
    {
        request_car = "Porsche";
    }
    else if (car.toLowerCase() == "jaguar")
    {
        request_car = "Jaguar";
    }
    else
    {
        request_car = "Ford";
    }
 
    http.get("http://172.31.63.164/cars/last_driven.php?car=" + request_car, function (res)
    {
        // Accumulate the chunked HTTP response from the PHP API:
        var apiResponseString = '';
        res.on('data', function (data)
        {
            apiResponseString += data;
        });

        res.on('end', function ()
        {
            speechText = apiResponseString;
            repromptText = "Would you like to learn more about that car? Please say yes or no.";
 
            speechOutput =
            {
                speech: speechText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            repromptOutput =
            {
                speech: repromptText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            response.askWithCard(speechOutput, repromptOutput, "Hardin Home: Cars", speechText);
        });
    });
}

There are a few things to note in the above function:

  1. The function receives three arguments: intent, session, and response. The intent is an object that contains all of the input from Alexa, including custom variables mapped to custom slot types that I defined (see the next section). The session variable is an object that I can write to. This lets me preserve information across multiple Alexa calls, which is critical for maintaining state in a conversation. For example, I’d want to store the car being discussed so that if I ask Alexa for more information about that car, I don’t have to repeat its name in every sentence I speak. Finally, the response is an object that I call when I’m ready to return data. I can call response’s methods from within an asynchronous block, which is huge for this specific implementation, since the intent function can return before I receive data back from an HTTP request, and I want to wait to call the response until I have data.
  2. The block of if statements that smooths the input is fairly important, since we don’t know what casing we’re going to get back from Alexa. It also lets us account for things like homonyms if we’re not using a set custom slot type. (This block is duplicated across all three intent handlers, so it could be factored into a helper; see the sketch after this list.)
  3. Finally, I make an HTTP request to my EC2 server, and when I get data back I respond to Alexa. I call the askWithCard() method on the response object, which allows me to say a sentence (speechOutput), send a reprompt sentence (repromptOutput), and then send some text to display on a card view in the Alexa app, which will be visible from the iOS/Android app and will automatically appear on the Kindle Fire that I have paired with my Echoes.
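
As noted in point 2, the input-normalization block is duplicated across all three intent handlers, so it could be factored into a small helper. A minimal sketch (hypothetical; not part of the original skill):

// Map a spoken car name to the canonical name expected by the PHP API.
// Anything unrecognized (e.g., "truck") falls back to "Ford", matching
// the else branch of the original if/else chain:
function normalizeCar(car)
{
    var canonical = { mercedes: "Mercedes", porsche: "Porsche", jaguar: "Jaguar" };
    return canonical[(car || "").toLowerCase()] || "Ford";
}

Each handler could then replace the whole chain with var request_car = normalizeCar(car);.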

getCarsDrive

function getCarsDrive(intent, session, response)
{
    var speechText = "",
    repromptText = "",
    speechOutput,
    repromptOutput;
 
    var car = intent.slots.Car.value;
    session.attributes['car'] = car;
 
    var request_car = "";
 
    if (car.toLowerCase() == "mercedes")
    {
        request_car = "Mercedes";
    }
    else if (car.toLowerCase() == "porsche")
    {
        request_car = "Porsche";
    }
    else if (car.toLowerCase() == "jaguar")
    {
        request_car = "Jaguar";
    }
    else
    {
        request_car = "Ford";
    }

    http.get("http://172.31.63.164/cars/drive.php?car=" + request_car, function (res)
    {
        // Consume the HTTP response (the confirmation text is generated locally):
        var apiResponseString = '';
        res.on('data', function (data)
        {
            apiResponseString += data;
        });

        res.on('end', function ()
        {
            speechText = "Alright, I've recorded that you're driving the " + car + " today!";
 
            speechOutput =
            {
                speech: speechText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            response.tellWithCard(speechOutput, "Hardin Home", speechText);
        });
    });
}

getCarsMoreDetail

function getCarsMoreDetail(intent, session, response)
{
    var speechText = "",
    repromptText = "",
    speechOutput,
    repromptOutput;
 
    var car = session.attributes['car'];
    if (car == undefined) car = "mercedes";
    var request_car = "";

    if (car.toLowerCase() == "mercedes")
    {
        request_car = "Mercedes";
    }
    else if (car.toLowerCase() == "porsche")
    {
        request_car = "Porsche";
    }
    else if (car.toLowerCase() == "jaguar")
    {
        request_car = "Jaguar";
    }
    else
    {
        request_car = "Ford";
    }
 
    http.get("http://172.31.63.164/cars/more_info.php?car=" + request_car, function (res)
    {
        // Accumulate the chunked HTTP response from the PHP API:
        var apiResponseString = '';
        res.on('data', function (data)
        {
            apiResponseString += data;
        });

        res.on('end', function ()
        {
            speechText = "Here is some more detail about the " + car + ": " + apiResponseString;

            speechOutput =
            {
                speech: speechText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            response.tellWithCard(speechOutput, "Hardin Home", speechText);
        });
    });
}

Lastly, I needed to define a hook to call all of the code I just wrote in response to Alexa input:

// Create the handler that responds to the Alexa Request:
exports.handler = function (event, context)
{
    var hardinHome = new HardinHome();
    hardinHome.execute(event, context);
};

Alexa SDK Implementation

After publishing the Lambda function, Amazon assigns it an ARN, which is a unique identifier that allows it to be called from other AWS services. A Lambda ARN looks something like this:

arn:aws:lambda:us-east-1:123456789:function:HardinHome

Note that Alexa can currently only call Lambda functions in the us-east-1 (Northern Virginia) and eu-west-1 (Ireland) regions, so my Lambda function needs to be deployed in one of those regions and have a corresponding ARN to be visible to Alexa. To create the Alexa app, I go to the Alexa SDK developer page and add a new skill. I set the skill information like so:

[Screenshot: Alexa skill information]

After that, I point it at my Lambda function:

[Screenshot: Alexa skill endpoint configuration pointing at the Lambda ARN]

All that is left now is to define my interaction model, which specifies how I can talk to Alexa to activate the skill, and then to test it. The skill will automatically be deployed to all of my Echoes, since my Alexa developer account is linked to the Amazon account associated with the Echoes. My interaction model consists of several parts:

  • Intent Schema: This is a JSON structure that maps all of the callbacks that I defined in my Lambda function, and describes any variables that will be mined from the words that I speak to Alexa.
  • Custom Slot Types: These are custom enums that allow me to define options that Alexa can match. For example, I might define a custom slot type of “car”, with the options being the various cars that I own.
  • Sample Utterances: These are sample English phrases that are associated with intents in the intent schema, with wildcard variables that correspond to either custom or built-in slot types.

In the case of this skill, here is my intent schema (the intents should look familiar from the node.js code that I deployed to Lambda):

{
    "intents": [
        {
            "intent": "CarsDriven",
            "slots": [
                {
                    "name": "Car",
                    "type": "LIST_OF_CARS"
                }
            ]
        },
        {
            "intent": "CarsDrive",
            "slots": [
                {
                    "name": "Car",
                    "type": "LIST_OF_CARS"
                }
            ]
        },
        {
            "intent": "CarsMoreDetail"
        },
        {
            "intent": "CarsNoMoreDetail"
        },
        {
            "intent": "AMAZON.HelpIntent"
        },
        {
            "intent": "AMAZON.StopIntent"
        },
        {
            "intent": "AMAZON.CancelIntent"
        }
    ]
}

The only custom slot type referenced above is LIST_OF_CARS, which is defined as:

mercedes | porsche | jaguar | ford | truck

Finally, here are my sample utterances, which reference both the custom slots and the intent schema:

CarsDriven when was {Car} last driven
CarsDriven what day was {Car} last driven
CarsDriven when did I last drive the {Car}
CarsDriven when I last drove the {Car}

CarsMoreDetail tell me more about that car
CarsMoreDetail yes
CarsMoreDetail yeah

CarsNoMoreDetail no
CarsNoMoreDetail nope

CarsDrive I drove the {Car} today
CarsDrive I'm driving the {Car} today

The sample utterances should be fairly easy to follow; they allow me to talk to Alexa and say something like, “Alexa, ask Hardin Home when I last drove the Jaguar.” Alexa will respond, “You last drove the Jaguar on Monday. Would you like to learn more about this car? Please answer yes or no.” I can respond yes and be read a little blurb about the car, or no and Alexa will stop talking. I can also say, “Alexa, tell Hardin Home that I’m driving the truck today,” and Alexa will respond with, “Alright, I’ve recorded that you’re driving the truck today.” This interaction is exactly what I set out to achieve in my requirements above, so I’m done!

I enable the skill for testing and send it to my Echoes:

[Screenshot: enabling the skill for testing]

I can then use the handy debug console to send text snippets to my service, and examine the output:

[Screenshot: the skill test console]

I can then use the skill on an actual Echo, and everything works as expected!

Conclusion

This is obviously just an initial implementation of this skill’s potential capabilities. Aside from refactoring the API to use a microframework, there are a lot of cool things that could be done. I could add reporting capabilities so that Alexa could answer queries like, “How many times in the last three months have I driven the Porsche?” I could also add an integration with Arlo or SmartThings and IFTTT that uses motion sensors to automatically log when cars are taken out, instead of me having to tell Alexa. The possibilities are, as with most home automation tasks, essentially endless.