All posts by Jon Hardin

Woodworking and Trim Work

I’m working on improving my woodworking and trim skills through several small projects in the house this winter. I’m planning to replace the railing on both stairwells with nicer wood and better hardware. I’d also like the railing to be continuous, with smooth, rounded corners rather than hard cutoffs. Here is what it looks like now:

[Photo: the current stairwell railing]

You can purchase rounded corner sections in red oak that join to the straight sections of railing, and the oak looks very nice when stained:

[Photo: a rounded red oak corner section]

Before I do that, I’m learning how to get the right color and finish on the wood, and how to join the corners to the straight sections seamlessly:

[Photo: stain and joinery test pieces]

In addition to the railing project, I’m also learning how to do crown molding. I’m planning to put crown molding in the guest bedroom first, since it is a fairly standard rectangular room that will be a good proving ground for seeing how it will look in the other bedrooms. My goal is to match the stain color to the other trim in our house, which is golden pecan stained oak. Crown molding is somewhat complicated to install; the biggest challenge is cutting and aligning the corners perfectly. The DIY Network has a good tutorial video on how to install it. I’m hoping to achieve an effect something like the following:

[Photos: examples of stained crown molding]

Shower Door

Last winter, I remodeled the bathroom off of our kitchen. It originally had a lot of fairly tacky 1990s-style gold fixtures in it, like this:

[Photo: the bathroom’s original gold fixtures]

The end result turned out pretty well, and definitely has a different look from the original:

[Photos: the remodeled bathroom]

However, I didn’t get around to installing a door on the shower at the time. I recently installed a clear glass door with an oil-rubbed bronze frame from Sterling, which matches the sink and shower fixtures. It was a fairly straightforward installation; the only challenge was anchoring either side of the frame into the wall through the arabesque tiles on the shower walls.

[Photo: anchoring the door frame through the arabesque tile]

Here is the final result, after the door was fully installed:

[Photos: the fully installed shower door]

Library Chandelier

Whenever I go to New Orleans, one of my favorite things to do is shop for antiques on Royal Street (after grabbing a $1.50 lunch martini and New Orleans-style barbecued shrimp at Mr. B’s). Royal Street is known for antiques, particularly chandeliers. On my most recent trip, I wanted to look for a chandelier for my library, specifically one in wrought iron to complement the room’s existing dark woods and wrought iron:

[Photo: the library’s dark wood and wrought iron]

Several blocks down Royal Street from Canal, I walked into Royal Antiques. Located at 309 Royal Street in the French Quarter, it is a fourth-generation family business specializing in 17th-, 18th-, and 19th-century English, French, and Continental furniture and decorative accessories:

[Screenshot: Royal Antiques, 309 Royal Street]

After getting a tour and talking with one of the saleswomen at the store, I found an incredible wrought iron chandelier from the 1880s that originally hung in a wine bar in Marseilles. I liked that bit of history: my library is right next to my wine cellar, so the wine bar connection felt fitting. The chandelier was originally made to hold candles, but had been retrofitted for electricity sometime in the early-to-mid twentieth century. After negotiating the price down a bit, I bought the chandelier and had it shipped back to Madison. It only weighed 33 pounds, so shipping came in at $70, which certainly beat trying to carry it onto an airplane. Thankfully, it arrived in one piece:

[Photo: the chandelier, arrived intact]

The chandelier didn’t come with many of the components required to install it in the library; it basically just had two wires sticking out the top and a chain from which to hang it. I went to Home Depot and bought a Screw Collar Loop Kit from Westinghouse, which gave me the hardware I needed to anchor the light into the ceiling light box:

[Photo: Westinghouse Screw Collar Loop Kit]

I spray-painted the loop, bought a wrought iron S-hook, and with some simple wiring, the chandelier was operational!

[Photo: the chandelier wired and hung]

It provides great light, and really complements the look of the library:

[Photos: the chandelier lit in the library]

Choosing the Right Smart Switch

One of the primary goals of my initial home automation installation was to install smart light switches that would allow me to turn lights on and off with my voice. While it is cool to be able to walk into a room and say, “Alexa, turn on the living room,” the real power comes when you can set up scenes that control multiple lights at once. I can say, “Alexa, turn on the downstairs,” to turn on the set of lights I normally like to have on for the middle level of our house. I can also say, “Alexa, turn on the whole downstairs,” and every downstairs light will turn on. Finally, I can say, “Alexa, turn off the whole house,” when I walk out the door, and Alexa will turn off any lights in the house that happen to be on:

[Screenshot: smart home groups in the Alexa app]

Virtually all mainstream wifi smart switches work with the Echo platform, so compatibility was not an issue. With that in mind, my first inclination was to use the WeMo platform, since it was one of the most heavily marketed platforms I had seen in stores:

[Photo: a WeMo switch]

WeMo makes a small number of products, and the two mainline products I chose to work with are the WeMo Switch ($34.99 per switch) and the WeMo Light Switch ($49.99 per switch). The Switch is a smart outlet that plugs into an existing outlet, and the Light Switch is an actual light switch that installs into the wall in place of a traditional switch. The Switch works as advertised. It is a little clunky to pair, since it communicates directly with the wifi router but has to be connected to the router initially via the WeMo app. Once that is done, however, it gets its job done and allows you to turn outlet-powered lights on and off. The setup through the WeMo app and the Alexa app is pretty straightforward:

[Screenshots: setup in the WeMo and Alexa apps]

The problems came when working with the WeMo Light Switches. Like the other WeMo devices, they are somewhat clunky to pair but do function; however, they have a critical Achilles’ heel. If you read the specifications, it says this:

Replaces single pole switch. Not compatible with 3 way (multi location control) switches.

This means that if you have a light with multiple switches that turn it on and off (which is extremely common), the WeMo Light Switch will not work for you. This sent me back to the drawing board, and I landed on the Caséta Wireless platform from Lutron ($54.99 per switch). These switches support three-way wiring, and come with their own base station that connects to the wifi router via an Ethernet cable. The base station introduces a small risk: if it ever stops functioning, the switches become useless. I mitigated that by buying a backup base station in case Lutron stops making the platform in the future. The benefit of the base station is that pairing the switches to wifi is much smoother. The switches are also higher quality, and come with dimmer capabilities:

[Photo: a Lutron Caséta switch]

The Lutron app is fine, and they integrate with the Alexa app just like WeMo:

[Screenshot: Caséta switches in the Alexa app]

Most importantly, they support wiring for three-way functionality:

[Diagram: Caséta three-way wiring]

I would highly recommend the Caséta platform. I have installed 16 switches throughout the house, complementing the six Echoes that give me complete voice-activated lighting in every room.

New Master Bedroom Set

I’m excited to have purchased a new bedroom set from Restoration Hardware, along with a mattress from Casper, for our master bedroom. For a long time we’ve had mix-and-match furniture in that room, but now it will have a beautiful complete set from RH’s Montpellier collection.

[Screenshot: RH’s Montpellier collection]

Upgrading Tacky ’90s Bathroom Fixtures

A year ago, I did a full remodel of the bathroom off of our kitchen, on the middle floor of our house. It went from looking like this:

[Photo: the bathroom before]

To this:

[Photos: the bathroom after]

This year, I wanted to improve and modernize the look of our four other bathrooms without doing a full remodel. These bathrooms had already been repainted, and had vanities and woodwork that I still liked. Their biggest issue was a lot of very ’90s clear plastic and white fixtures on the sinks, as well as in the showers. Simply upgrading the fixtures and replacing the shower hardware to match the bronze used throughout the rest of our house’s cabinetry would be a huge improvement. This is the style of fixture that was in place before:

[Photo: the old clear plastic and white sink fixtures]

In order to match the bronze used in other fixtures throughout the house, I selected this faucet from Kohler for the sinks:

[Photo: the bronze Kohler faucet]

I purchased six of these, and was able to install each one in about 30 minutes (including removing the old fixtures). Here is the end result, which was a massive improvement!

[Photos: the new faucets installed]

After that, it was time to move on to the showers. I didn’t want to redo the plumbing (which would have involved either ripping out drywall to get at the pipes from the back, or ripping out and replacing the shower walls). Therefore, I purchased four of the following DANCO trim kits to replace the chrome Moen fixtures:

[Photo: the DANCO trim kit]

To go with that, I also purchased three of these shower heads (leaving the master shower head alone for now):

[Photo: the replacement shower head]

After installing all of these, the showers started to look a lot better:

[Photos: the updated showers]

Lastly, I purchased a kit to replace the toilet handles in all of the bathrooms, again matching the bronze style of the other fixtures:

[Photo: the bronze toilet handle]

However, there were still some issues. The necks for the shower heads had been sweated onto the pipes when the showers were originally installed (instead of screwed on), so there was no easy way to replace them without getting into more plumbing than I wanted to take on for this project. Also, the Jack-and-Jill bathroom between our guest bedroom and my wife’s office had a bathtub with fixtures that could not be removed or replaced without breaking them and potentially damaging the tub. Therefore, I picked a simple solution: matte-bronze metallic spray paint, which did a fantastic job of matching the finish of the existing fixtures to the bronze of the newly installed ones:

[Photo: spray-painted tub fixtures matching the new bronze]

All of the above work was completed over a couple of casual weekends, which is a lot of bang for the buck in terms of time. Perhaps I will do a more complete remodel of some of the bathrooms in the future, but for now I have four much better looking bathrooms that don’t feel like they are straight out of the ’90s.

Integrating Arlo, Wemo, and Echo via IFTTT

While writing custom Alexa skills has been necessary for some of the hyper-specific home automation tasks I’ve wanted to do, there are often much more straightforward ways to integrate different smart home platforms. The best of these is IFTTT, which stands for “if this, then that.” It allows you to use pre-built applets (or create new ones) that activate a feature of one home automation platform in response to input from another. For example:

[Screenshot: example IFTTT applets]

The first thing I used IFTTT for in my house was integrating Alexa and Arlo, since Arlo doesn’t have an official skill for Alexa. I wanted to utilize Arlo’s motion sensing capabilities to trigger lights, and also to allow Alexa to arm and disarm the Arlo security system. I usually have Arlo on a timer, but if I am running late for work and Arlo has kicked in before I’ve left, it starts to blow up my phone with push notifications, and it’s useful to be able to tell whichever Echo I’m nearest, “Alexa, trigger disarm Arlo,” to make the notifications stop. To set this up, I added my Amazon and Arlo accounts to IFTTT via their SSO integration, and then created a custom applet. First, I picked Echo/Alexa as the base technology:

[Screenshot: selecting Amazon Alexa as the trigger service]

Then, I selected a trigger to activate the applet based on a specific phrase:

[Screenshots: defining the trigger phrase]

Now that my trigger was defined, I had to select Arlo as the technology that would be activated in response to the trigger:

[Screenshot: selecting Arlo as the action service]

Arlo only gives you a few options to activate via IFTTT, but one of those is disarm, which is what I was looking for:

[Screenshots: selecting the disarm action]

I selected the ID for my Arlo base station, and I saved and activated the service:

[Screenshot: the completed applet]

Just like that, I was able to disarm Arlo with my voice, no programming required! After that, I created an applet to turn on Wemo switches via IFTTT (no Echo required), and integrated Alexa with several other web-based services.

Custom Alexa Skill for Tracking Car Use

Over the last several weeks, I have been adding various home automation technologies to the house: Arlo for home security, Wemo and Lutron Caseta for automated lighting, and Amazon Echo/Alexa for voice control. Out of the box, Alexa’s integration with other smart home technologies is pretty good. It doesn’t take any custom work to be able to use your voice to turn lights on and off, and integrating Alexa with Arlo was fairly straightforward using the IFTTT service, which allows for basic “if this, then that” style applets that can be triggered via voice through Alexa.

However, in order to build a true smart home, I wanted to be able to write my own applications that could execute within my IoT ecosystem and serve needs very specific to me. A few of my initial ideas were:

  • Wine Cellar Integration: I want to be able to ask Alexa if we have a particular bottle in stock, and if so how many bottles we have. This would require integrating an Alexa skill with Vinocell, a wine cellar management application that I use.
  • Madison Restaurant Ideas: My wife and I are frequently indecisive about where to eat dinner. I want to be able to ask Alexa for ideas, tailored to our specific preferences and location, beyond what an app like Urban Spoon could provide.
  • Car Tracking: As a sports car collector, I have many cars. I often find myself wondering, when was the last time I actually drove the Porsche? How often in the last month or two have I driven the Porsche?

This post will focus on the last idea. It struck me as a fairly good first Alexa project, since it wouldn’t involve integrations with any third party APIs, just APIs that I’d have to develop to store the requisite data.

Requirements

I typically interact with Alexa every morning on my way out the door. I ask “what’s new?” to get my daily news briefing, ask what is on my calendar, ask about the weather, and ask about my commute. The goal for the skill is to be able to say, “Alexa, tell Hardin Home that I’m driving the Mercedes today.” Alexa will record in a database a timestamp indicating that I drove the Mercedes that day, then retrieve that information when I say, “Alexa, ask Hardin Home when I last drove the Mercedes,” responding with a sentence like, “You last drove the Mercedes six days ago.”

Architecture

This project would involve several components: an Alexa skill, which would call a function on AWS Lambda (written in node.js), which would in turn call a series of very simple PHP APIs hosted on an Ubuntu/Apache EC2 instance, with a MySQL database storing the data about the cars. The EC2 instance would be placed inside a VPC, with an AWS security group limiting access on port 80 solely to the VPC. This allows me to grant the Lambda function access to the VPC, so that it (and only it) can interact with my API, saving me from having to implement a lot of additional security measures like I’d have to if the EC2 instance were open to the outside world.

[Diagram: the skill’s architecture]

API Implementation

To implement the API, I created a t2.small EC2 instance and assigned it an Elastic IP. I set up a security group that opened all ports within the VPC, and then granted access from my home IP on ports 80 and 22, allowing me to connect to the server and deploy code, as well as to test web services from a browser:

[Screenshot: the security group’s inbound rules]

Once this was done, I SSH’ed into my server and installed a basic LAMP stack:

sudo apt-get update
sudo apt-get install lamp-server^
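As an aside, the same security group rules could also be created from the AWS CLI rather than the console. A rough sketch, where the group ID, VPC CIDR, and home IP are placeholders rather than my real values:

# Allow all traffic from within the VPC (placeholder group ID and CIDR):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol -1 --cidr 172.31.0.0/16
# Allow HTTP and SSH from my home IP (placeholder address):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 203.0.113.7/32
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.7/32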

After this, I installed phpMyAdmin and created a database called hardin_home. I added two simple tables: cars and cars_driven. The cars table holds information about each car (which will be used later in a sample conversational query with Alexa), and the cars_driven table holds a list of timestamps for when each car was driven:

[Screenshot: the cars and cars_driven tables in phpMyAdmin]
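For reference, the schema boils down to something like the following. This is my reconstruction from the PHP services below, so treat the exact column types as a guess:

-- Reconstructed schema (column types are assumptions):
CREATE TABLE cars (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(64) NOT NULL,
    description TEXT
);

CREATE TABLE cars_driven (
    id INT AUTO_INCREMENT PRIMARY KEY,
    car VARCHAR(64) NOT NULL,
    driven_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);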

I implemented three quick-and-dirty PHP services that can be called by the Lambda function. An obvious refactor would be to rebuild the API on a proper framework or microframework, but in this case I wanted to be able to crank out the API calls in five minutes, so they are just hand-rolled PHP. They are:

drive.php

<?php
// drive.php: record that the given car was driven right now.
$con = mysqli_connect("localhost", "XXXXX", "XXXXX", "hardin_home");

$con->query("insert into cars_driven (car) values ('" . $con->real_escape_string($_REQUEST['car']) . "')");

mysqli_close($con);

last_driven.php

<?php
// last_driven.php: report when the given car was last driven, in plain English.
$con = mysqli_connect("localhost", "XXXXX", "XXXXX", "hardin_home");

$result = $con->query("select * from cars_driven where car = '" . $con->real_escape_string($_REQUEST['car']) . "' order by id desc limit 1");
$row = $result->fetch_assoc();

$timestamp = $row['driven_timestamp'];

if (date('Ymd') == date('Ymd', strtotime($timestamp)))
{
    echo "You last drove the " . $_REQUEST['car'] . " today.";
}
else
{
    // humanTiming() converts an elapsed time into a phrase like "six days" (see below):
    echo "You last drove the " . $_REQUEST['car'] . " " . humanTiming(strtotime($timestamp)) . " ago, on " . date('F j, Y', strtotime($timestamp)) . ".";
}

mysqli_close($con);
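Note that last_driven.php calls a humanTiming() helper that isn’t shown above; it turns an elapsed time into a phrase like “six days.” A minimal sketch of such a helper (my stand-in, not necessarily what runs on the server):

<?php
// Hypothetical helper: convert an elapsed time into "6 days", "3 weeks", etc.
function humanTiming($time)
{
    $delta = time() - $time;
    $units = array(
        31536000 => 'year',
        2592000  => 'month',
        604800   => 'week',
        86400    => 'day',
        3600     => 'hour',
        60       => 'minute'
    );

    foreach ($units as $seconds => $name)
    {
        if ($delta >= $seconds)
        {
            $count = floor($delta / $seconds);
            return $count . ' ' . $name . ($count > 1 ? 's' : '');
        }
    }

    return 'moments';
}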

more_info.php

<?php
// more_info.php: return the stored description for the given car.
$con = mysqli_connect("localhost", "XXXXX", "XXXXX", "hardin_home");

$result = $con->query("select * from cars where name = '" . $con->real_escape_string($_REQUEST['car']) . "'");
$row = $result->fetch_assoc();

echo $row['description'];

mysqli_close($con);

Lambda Implementation

According to Amazon, “AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.” Currently, Lambda supports node.js, Python, and Java. For this implementation, I selected node.js. First, I needed to configure a Lambda function to use node.js, and assign it a role with access to the VPC that I set up earlier:

[Screenshot: configuring the Lambda function for node.js]

Under advanced settings, I gave it explicit access to my VPC (and thus my PHP services on my EC2 instance):

[Screenshot: granting the Lambda function access to the VPC]

Prior to writing and deploying my node.js application package to Lambda, I needed to set up how the Lambda function would be triggered. For this implementation, the trigger would obviously be an Alexa call:

[Screenshot: setting the Alexa Skills Kit trigger]

Typically, applications are deployed to Lambda by uploading a ZIP file of the Lambda project. My project has a very simple file structure:

  • AlexaSkill.js: A base class provided by Amazon that I can inherit
  • index.js: My application
  • node_modules: Any third party node.js modules
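Packaging and deploying can be done by hand through the Lambda console, or scripted with the AWS CLI. A rough sketch, assuming the function is named HardinHome:

# Zip the project contents (not the enclosing folder), then push the package to Lambda:
zip -r skill.zip index.js AlexaSkill.js node_modules
aws lambda update-function-code --function-name HardinHome --zip-file fileb://skill.zip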

For this project, I didn’t have any third party node.js modules, so node_modules is empty. In my index.js file, I started with the following:

'use strict';

// Link to our Alexa Skill (see next section):
var APP_ID = "amzn1.ask.skill.8b0b2dac-5031-4257-961d-3daccb68642f";

// The AlexaSkill prototype and helper functions:
var AlexaSkill = require('./AlexaSkill');

// Include the HTTP lib so we can call our PHP API:
var http = require('http');

// Our implementation:
var HardinHome = function () {
    AlexaSkill.call(this, APP_ID);
};

// Extend AlexaSkill:
HardinHome.prototype = Object.create(AlexaSkill.prototype);
HardinHome.prototype.constructor = HardinHome;

HardinHome.prototype.eventHandlers.onSessionStarted = function (sessionStartedRequest, session)
{
    // Any session init logic would go here...
};

HardinHome.prototype.eventHandlers.onLaunch = function (launchRequest, session, response)
{
    getWelcomeResponse(response);
};

HardinHome.prototype.eventHandlers.onSessionEnded = function (sessionEndedRequest, session)
{
    // Any session cleanup logic would go here...
};

Now that the base implementation is set up, we need to define our intent handlers. These are hooks that receive calls from the Alexa SDK when Alexa matches a particular speech pattern, which will be defined below in our Alexa SDK implementation:

HardinHome.prototype.intentHandlers =
{
    "CarsDriven": function (intent, session, response)
    {
        getCarsDriven(intent, session, response);
    },
 
    "CarsDrive": function (intent, session, response)
    {
        getCarsDrive(intent, session, response);
    },
 
    "CarsMoreDetail": function (intent, session, response)
    {
        getCarsMoreDetail(intent, session, response);
    },

    "CarsNoMoreDetail": function (intent, session, response)
    {
        response.tell("");
    },

    "AMAZON.HelpIntent": function (intent, session, response)
    {
        helpTheUser(intent, session, response);
    },

    "AMAZON.StopIntent": function (intent, session, response)
    {
        var speechOutput = "Goodbye";
        response.tell(speechOutput);
    },

    "AMAZON.CancelIntent": function (intent, session, response)
    {
        var speechOutput = "Goodbye";
        response.tell(speechOutput);
    }
};

From there, I needed to define the three key functions called in the block above: getCarsDriven, getCarsDrive, and getCarsMoreDetail. The first asks Alexa when I last drove a car, the second tells Alexa I drove a car, and the third asks Alexa for more information about a car. That last call is something I implemented purely to experiment with Alexa’s conversational abilities: she can ask me whether I want more information about a car, and provide it if I respond yes.

getCarsDriven

function getCarsDriven(intent, session, response)
{
    var speechText = "",
    repromptText = "",
    speechOutput,
    repromptOutput;
 
    var car = intent.slots.Car.value;
    session.attributes['car'] = car;
 
    var request_car = "";
 
    if (car.toLowerCase() == "mercedes")
    {
        request_car = "Mercedes";
    }
    else if (car.toLowerCase() == "porsche")
    {
        request_car = "Porsche";
    }
    else if (car.toLowerCase() == "jaguar")
    {
        request_car = "Jaguar";
    }
    else
    {
        request_car = "Ford";
    }
 
    http.get("http://172.31.63.164/cars/last_driven.php?car=" + request_car, function (res)
    {
        var noaaResponseString = '';
        res.on('data', function (data)
        {
            noaaResponseString += data;
        });

        res.on('end', function ()
        {
            speechText = noaaResponseString;
            repromptText = "Would you like to learn more about that car? Please say yes or no.";
 
            speechOutput =
            {
                speech: speechText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            repromptOutput =
            {
                speech: repromptText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            response.askWithCard(speechOutput, repromptOutput, "Hardin Home: Cars", speechText);
        });
    });
}

There are a couple of things to note in the above function:

  1. The function receives three arguments: intent, session, and response. The intent is an object that contains all of the input from Alexa, including custom variables mapped to the custom slot types that I defined (see the next section). The session variable is an object that I can write to; this lets me preserve information across multiple Alexa calls, which is critical for maintaining state in a conversation. For example, I want to store the car being discussed so that if I ask Alexa for more information about that car, I don’t have to repeat its name in every sentence I speak. Finally, the response is an object that I call when I’m ready to return data. I can call response’s methods from within an asynchronous block, which is huge for this specific implementation, since the intent function can return before I receive data back from the HTTP request, and I want to wait to call the response until I have data.
  2. The block of if statements that smooths the input is fairly important, since we don’t know what casing we’re going to get back from Alexa. It also lets us account for things like homonyms if we’re not using a set custom slot type. (A possible refactor of these repeated chains into a single helper is sketched after this list.)
  3. Finally, I make an HTTP request to my EC2 server, and when I get data back I respond to Alexa. I call the askWithCard() method on the response object, which allows me to say a sentence (speechOutput), send a reprompt sentence (repromptOutput), and then send some text to display on a card view in the Alexa app, which will be visible from the iOS/Android app and will automatically appear on the Kindle Fire that I have paired with my Echoes.
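As mentioned in point 2, the repeated if/else chains in each handler could be collapsed into a single helper. A sketch of what that refactor might look like (normalizeCar is my name for it, not part of the skill as built):

// Hypothetical helper: map the raw slot value onto a known car name,
// falling back to "Ford" just like the original chains do.
function normalizeCar(rawCar)
{
    var knownCars = ["Mercedes", "Porsche", "Jaguar"];

    for (var i = 0; i < knownCars.length; i++)
    {
        if (rawCar && rawCar.toLowerCase() == knownCars[i].toLowerCase())
        {
            return knownCars[i];
        }
    }

    return "Ford";
}

// Each handler could then start with:
// var request_car = normalizeCar(intent.slots.Car.value);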

getCarsDrive

function getCarsDrive(intent, session, response)
{
    var speechText = "",
    repromptText = "",
    speechOutput,
    repromptOutput;
 
    var car = intent.slots.Car.value;
    session.attributes['car'] = car;
 
    var request_car = "";
 
    if (car.toLowerCase() == "mercedes")
    {
        request_car = "Mercedes";
    }
    else if (car.toLowerCase() == "porsche")
    {
        request_car = "Porsche";
    }
    else if (car.toLowerCase() == "jaguar")
    {
        request_car = "Jaguar";
    }
    else
    {
        request_car = "Ford";
    }

    http.get("http://172.31.63.164/cars/drive.php?car=" + request_car, function (res)
    {
        var noaaResponseString = '';
        res.on('data', function (data)
        {
            noaaResponseString += data;
        });

        res.on('end', function ()
        {
            speechText = "Alright, I've recorded that you're driving the " + car + " today!";
 
            speechOutput =
            {
                speech: speechText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            response.tellWithCard(speechOutput, "Hardin Home", speechText);
        });
    });
}

getCarsMoreDetail

function getCarsMoreDetail(intent, session, response)
{
    var speechText = "",
    repromptText = "",
    speechOutput,
    repromptOutput;
 
    var car = session.attributes['car'];
    if (car == undefined) car = "mercedes";
    var request_car = "";

    if (car.toLowerCase() == "mercedes")
    {
        request_car = "Mercedes";
    }
    else if (car.toLowerCase() == "porsche")
    {
        request_car = "Porsche";
    }
    else if (car.toLowerCase() == "jaguar")
    {
        request_car = "Jaguar";
    }
    else
    {
        request_car = "Ford";
    }
 
    http.get("http://172.31.63.164/cars/more_info.php?car=" + request_car, function (res)
    {
        var noaaResponseString = '';
        res.on('data', function (data)
        {
            noaaResponseString += data;
        });

        res.on('end', function ()
        {
            speechText = "Here is some more detail about the " + car + ": " + noaaResponseString;

            speechOutput =
            {
                speech: speechText,
                type: AlexaSkill.speechOutputType.PLAIN_TEXT
            };

            response.tellWithCard(speechOutput, "Hardin Home", speechText);
        });
    });
}

Lastly, I needed to define a hook to call all of the code I just wrote in response to Alexa input:

// Create the handler that responds to the Alexa Request:
exports.handler = function (event, context)
{
    var hardinHome = new HardinHome();
    hardinHome.execute(event, context);
};

Alexa SDK Implementation

After publishing the Lambda function, Amazon assigns it an ARN, a unique identifier that allows it to be called from other AWS services. A Lambda ARN looks something like this:

arn:aws:lambda:us-east-1:123456789:function:HardinHome

Note that Alexa can currently only call Lambda functions in the us-east-1 (Northern Virginia) and eu-west-1 (Ireland) regions, so my Lambda function needs to be deployed in one of those regions and have a corresponding ARN to be visible to Alexa. To create the Alexa app, I go to the Alexa SDK developer page and add a new skill. I set the skill information like so:

[Screenshot: the skill information settings]

After that, I point it at my Lambda function:

[Screenshot: pointing the skill at the Lambda ARN]

All that is left now is to define my interaction model, which specifies how I can talk to Alexa to activate the skill, and to test it. The skill will be automatically deployed to all of my Echoes, since my Alexa developer account is linked to my normal Amazon account that is associated with the Echo. My interaction model consists of several parts:

  • Intent Schema: This is a JSON structure that maps all of the callbacks that I defined in my Lambda function, and describes any variables that will be mined from the words that I speak to Alexa.
  • Custom Slot Types: These are custom enums that allow me to define options that Alexa can match. For example, I might define a custom slot type of “car”, with the options being the various cars that I own.
  • Sample Utterances: These are sample English phrases that are associated with intents in the intent schema, with wildcard variables that correspond to either custom or built-in slot types.

In the case of this skill, here is my intent schema (the intents should look familiar from the node.js code that I deployed to Lambda):

{
 "intents": [
    {
        "intent": "CarsDriven",
        "slots": [
            {
                "name": "Car",
                "type": "LIST_OF_CARS"
            }
        ]
    },
    {
        "intent": "CarsDrive",
        "slots": [
            {
                "name": "Car",
                "type": "LIST_OF_CARS"
            }
        ]
    },
    {
        "intent" : "CarsMoreDetail" 
    },
    {
        "intent" : "CarsNoMoreDetail" 
    },
    {
        "intent": "AMAZON.HelpIntent"
    },
    {
        "intent": "AMAZON.StopIntent"
    },
    {
        "intent": "AMAZON.CancelIntent"
    }
 ]
}

The only custom slot type referenced above is LIST_OF_CARS, which is defined as:

mercedes | porsche | jaguar | ford | truck

Finally, here are my sample utterances, which reference both the custom slots and the intent schema:

CarsDriven when was {Car} last driven
CarsDriven what day was {Car} last driven
CarsDriven when did I last drive the {Car}
CarsDriven when I last drove the {Car}

CarsMoreDetail tell me more about that car
CarsMoreDetail yes
CarsMoreDetail yeah

CarsNoMoreDetail no
CarsNoMoreDetail nope

CarsDrive I drove the {Car} today
CarsDrive I'm driving the {Car} today

The sample utterances should be fairly easy to follow: they allow me to say something like, “Alexa, ask Hardin Home when I last drove the Jaguar.” Alexa will respond, “You last drove the Jaguar on Monday. Would you like to learn more about this car? Please answer yes or no.” I can respond yes and be read a little blurb about the car, or no and Alexa will stop talking. I can also say, “Alexa, tell Hardin Home that I’m driving the truck today,” and Alexa will respond with, “Alright, I’ve recorded that you’re driving the truck today.” This interaction is exactly what I set out to achieve in my requirements above, so I’m done!

I enable the skill for testing and send it to my Echoes:

[Screenshot: enabling the skill for testing]

I can then use the handy debug console to send text snippets to my service, and examine the output:

[Screenshot: the skill testing console]

I can also actually use the skill on my Echo, and everything works as expected!

Conclusion

This is obviously just an initial implementation of this skill’s potential capabilities. Aside from refactoring the API to use a micro-framework, there are a lot of cool things that could be done. I could add reporting capabilities, allowing Alexa to respond to queries like, “How many times in the last three months have I driven the Porsche?” I could also add an integration with Arlo or SmartThings via IFTTT that uses motion sensors to automatically log when cars are taken out, instead of me having to tell Alexa. The possibilities are, as with most home automation tasks, essentially endless.
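For what it’s worth, that reporting query is nearly a one-liner against the existing cars_driven table. A sketch, assuming the schema reconstructed earlier and MySQL date arithmetic:

-- How many times was the Porsche driven in the last three months?
SELECT COUNT(*)
FROM cars_driven
WHERE car = 'Porsche'
  AND driven_timestamp > DATE_SUB(NOW(), INTERVAL 3 MONTH);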