Alexa, ask my TV to do something thanks to my Raspberry Pi
I have already connected various electronic devices to Alexa (lamps, switches, a coffee machine), but each time it was with home-automation objects that had Alexa support built in. I just moved into a new house, and when I found my old Raspberry Pi 2 in a box, an idea occurred to me: what if my Raspberry Pi could join my Alexa automation environment? So I started a new project: making Alexa control my old, non-connected TV through my Raspberry Pi and an infrared emitter!
Below is an overview of my project implementation:
This article will give you some tips to implement it, in 3 parts:
- 1) How to control the television with an infrared (IR) emitter connected to the Raspberry Pi, using Lirc
- 2) How to create Alexa intents to control the Raspberry Pi with the Alexa Skills Kit developer console
- 3) How to link the Alexa cloud to our Raspberry Pi NodeJS server
Materials needed
- At least one Alexa device connected to your home Wi-Fi
- x1 Raspberry Pi or similar (here I use and describe a Raspberry Pi 2, but this can easily be adapted to other versions)
- x1 IR transmitter module (in my case, a Whadda WPM316 for Arduino). You can add an IR receiver if you need to record your remote; in my case, I was lucky and found the mapping easily on the internet.
- x2 GPIO jumper cables (female/female)
- A television (and its remote if you need to record each key)
Control my television with an IR emitter and Lirc
The first thing to do is to connect your IR LED. I chose to connect it to GPIO pin 18.
The second step is to install Lirc, the IR driver for Linux.
sudo apt-get install lirc
Here is my configuration. First, I edited /etc/lirc/lirc_options.conf as follows:
driver = default
device = /dev/lirc0
Then, edit /boot/config.txt and uncomment (or add, if it is not in the file) this line:
dtoverlay=gpio-ir-tx,gpio_pin=18
In case you also need an IR receiver to record remote keys, also uncomment:
dtoverlay=gpio-ir,gpio_pin=17
Adapt gpio_pin if needed, and reboot so the overlays are loaded. For more documentation about Lirc installation and key recording on the Raspberry Pi, check here and here.
Personally, my goal was to reproduce the Samsung TV remote. You can easily find most remotes by using the following keywords on GitHub or in your search engine:
lircd.conf <name of your remote device>
lircd.conf is the file in /etc/lirc that contains all the IR mappings. In my case, AA59-00823A was written on my Samsung TV remote, and thanks to this number I found the file on GitHub. If you're not lucky and can't find the right file, plug a receiver into your Raspberry Pi: Lirc can record your remote presses and generate your own lircd.conf with the irrecord command. Useful if you want to control an unpopular device.
Either way, place the lircd.conf you found (or generated) inside the Lirc folder: /etc/lirc/lircd.conf
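To give an idea of its shape, a lircd.conf roughly looks like the excerpt below. The remote name matches the irsend examples later in this article, but the hex codes and omitted timing parameters are only illustrative placeholders, not my real Samsung mapping:
begin remote
  name   SAMSUNG_AA59-00600A
  bits   16
  # ... timing parameters (header, one, zero, gap, ...) omitted here
  begin codes
      KEY_POWER      0xE0E040BF
      KEY_VOLUMEUP   0xE0E0E01F
  end codes
end remote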
# Restart lirc
sudo service lircd restart
# Send one press to power my TV on or off:
irsend SEND_ONCE SAMSUNG_AA59-00600A_POWER KEY_POWER
# Send one press of any other key:
irsend SEND_ONCE SAMSUNG_AA59-00600A KEY_VOLUMEUP
In case it doesn't work, here's a tip to check whether something is actually sent by your IR emitter: open the camera app on your phone, switch off the lights and point the camera at the emitter. If you see the IR light, it is working; IR light is visible through a phone camera thanks to the sensor :) ! You can also use SEND_START to send a specific key continuously, as shown below.
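For example, to hold the volume-up key down and then release it (using the same remote name as above):
irsend SEND_START SAMSUNG_AA59-00600A KEY_VOLUMEUP
irsend SEND_STOP SAMSUNG_AA59-00600A KEY_VOLUMEUP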
Prepare my Raspberry Pi for Alexa with a NodeJS Express server
One of the solutions I found was to create a NodeJS REST API exposed on the web: Alexa can only call public APIs. Otherwise, it may be possible to communicate over Bluetooth, but that seems relatively complicated.
Here is my solution:
The Node/Express API exposes the following REST route:
GET on https://<myHostAddress>/:command
command is one of the keys exposed in lircd.conf. I also needed to check whether it is the power button or not, because in my lircd.conf the power key is sent with a separate remote name (see the irsend commands above).
Then, I used the exec function from the NodeJS child_process module to execute this bash command:
irsend SEND_ONCE <remote name in lircd.conf> <chosen command, such as KEY_VOLUMEUP>
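Here is a minimal sketch of what such a server could look like. It is only an illustration of the idea: the remote names, the port and the KEY_ validation are my assumptions, so adapt them to your own lircd.conf:
// app.js: minimal Express server turning a REST call into an irsend command.
// Remote names and port are assumptions; adapt them to your own lircd.conf.
const express = require('express');
const { exec } = require('child_process');

const app = express();
const REMOTE = 'SAMSUNG_AA59-00600A'; // remote name from lircd.conf
const POWER_REMOTE = 'SAMSUNG_AA59-00600A_POWER'; // separate entry for the power key

app.get('/:command', (req, res) => {
  const command = req.params.command;
  // Only accept values that look like lircd.conf keys (KEY_POWER, KEY_VOLUMEUP, ...)
  if (!/^KEY_[A-Z0-9]+$/.test(command)) {
    return res.status(400).send('Unknown command');
  }
  // In my setup, the power key lives in its own remote entry.
  const remote = command === 'KEY_POWER' ? POWER_REMOTE : REMOTE;
  exec(`irsend SEND_ONCE ${remote} ${command}`, (err) => {
    if (err) {
      console.error(err.message);
      return res.status(500).send('irsend failed');
    }
    res.send(`Sent ${command}`);
  });
});

app.listen(3000, () => console.log('IR server listening on port 3000'));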
For my first increment, I used ngrok to quickly expose a public HTTPS endpoint; HTTPS is required for the fetch done by the Lambda.
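Assuming the server listens on port 3000 (my assumption, matching the sketch above), exposing it with ngrok is a single command:
ngrok http 3000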
In a second increment, I chose to open a port on my internet box and generate an HTTPS certificate linked to a personal DNS name.
To automatically run your NodeJS server on Raspberry Pi startup, I recommend using the PM2 library:
npm install pm2 -g
# Go to your server folder
pm2 start app.js
Replace app.js with your server's main file. It will now be launched when the Raspberry Pi boots.
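Note that, depending on your setup, PM2 may also need its boot hook registered and the process list saved so the server really comes back after a reboot:
pm2 startup
# Run the command that pm2 startup prints, then save the current process list:
pm2 save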
Create your first Alexa Skill Intent
You will need to use or create an Amazon account and go here (Alexa Skill Builder). Then, create a new skill with the following configuration: “Custom”, with an “Alexa-hosted Node.js” Lambda, and “Start from scratch”.
I chose the following sentence to launch my skill:
“Alexa, ask my television to <do something>”
The first thing to configure is “my television”: go to Invocation => Skill Invocation Name.
The second thing is the <do something>. In Alexa terms, this is called an intent. An intent is a combination of fixed words and slot types.
If we imagine that we want to program a lamp, we might create several intents such as:
Alexa, ask my lamp to set the lightness to <lightness_command> with the color <color_name>
Alexa, ask my lamp to show the color <color_name> with a lightness of <lightness_command>
Alexa, ask my lamp to switch the lightness to <lightness_command> and move to the color <color_name>
Here, we would have configured 3 intents with 2 slot types. In my case, just one slot type is needed: go to Slot Types, create one named “command”, and recreate the remote keys with conversational values. Example:
- KEY_POWER => ON; switch on; switch off; OFF; etc.
- KEY_VOLUMEUP => louder the sound; turn up the volume; turn the volume up; etc.
Then go to Intents => Add Intent => Create custom intent, and create some sample utterances including the “command” slot type we just created.
Example of sample utterance:
to {command}
Finally, you need to save and build the model (buttons at the top). All your modifications end up in a JSON document available under Intents => JSON Editor.
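For reference, here is a shortened sketch of what that JSON can look like for this skill; the intent name and the exact values are my assumptions, and your generated model will contain more entries:
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my television",
      "intents": [
        {
          "name": "CommandIntent",
          "slots": [{ "name": "command", "type": "command" }],
          "samples": ["to {command}"]
        }
      ],
      "types": [
        {
          "name": "command",
          "values": [
            { "name": { "value": "KEY_POWER", "synonyms": ["switch on", "switch off", "on", "off"] } },
            { "name": { "value": "KEY_VOLUMEUP", "synonyms": ["turn the volume up", "louder the sound"] } }
          ]
        }
      ]
    }
  }
}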
For more help, this project reproduces these steps with screenshots.
Link to your Raspberry Pi
The final step is to modify the Lambda code. Go to “Code” in the top menu.
Lambda is a serverless cloud service from AWS that is bundled with Alexa-hosted skills; it is the service behind the “Code” section.
Here is my minimal code:
const https = require('https');
const Alexa = require('ask-sdk-core');

// Handles every IntentRequest: reads the resolved "command" slot and forwards it to the Raspberry Pi API.
const IntentReflectorHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest';
    },
    handle(handlerInput) {
        // The resolved slot value is the lircd.conf key name (e.g. KEY_VOLUMEUP).
        const commandkey = handlerInput.requestEnvelope.request.intent.slots.command
            .resolutions.resolutionsPerAuthority[0].values[0].value.name;
        const speakOutput = `You just did the command ${commandkey}`;

        // Fire-and-forget call to the public API exposed by the Raspberry Pi.
        https.get(`https://my_public_api/${commandkey}`, res => {
        }).on('error', err => {
            console.log(err.message);
        });

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};

// Catch-all error handler: canHandle always returns true.
const ErrorHandler = {
    canHandle() {
        return true;
    },
    handle(handlerInput, error) {
        const speakOutput = 'Sorry, I did not understand the command to send to the TV';
        console.log(`~~~~ Error handled: ${JSON.stringify(error)}`);

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};

exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(IntentReflectorHandler)
    .addErrorHandlers(ErrorHandler)
    .lambda();
- exports.handler registers all the handlers that can be invoked. The SDK evaluates them as an ordered stack (a chain-of-responsibility pattern).
- Each handler contains a canHandle function. If canHandle returns false, its handle function is not called and the next handler registered in exports.handler is evaluated, and so on (see the sketch after this list).
- Because ErrorHandler has a canHandle that always returns true and sits at the bottom of the stack, it ends up running when an error occurs or when no other handler could deal with the request.
- If a handler's canHandle returns true, its handle function is executed and the remaining handlers are skipped.
- The handle function receives a handlerInput object that embeds all the information you need. I console.log it to find the resolved command, then forward it to my API.
- Each handler should return a response built with handlerInput.responseBuilder. You can chain calls like .speak("something").reprompt(...) if you want Alexa to say “something” and then wait for an answer from the user.
- I use https.get to call my API because it is natively included in NodeJS, without adding extra libraries.
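For example, if you wanted the skill to answer when it is opened without any command, you could register a dedicated handler before the intent handler. This handler is a hypothetical addition, not part of my minimal code:
// Hypothetical handler: answers “Alexa, open my television” with a prompt.
const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    handle(handlerInput) {
        return handlerInput.responseBuilder
            .speak('What should I do with the TV?')
            .reprompt('You can say, for example: turn the volume up.')
            .getResponse();
    }
};

// Handlers are evaluated in the order they are registered.
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(LaunchRequestHandler, IntentReflectorHandler)
    .addErrorHandlers(ErrorHandler)
    .lambda();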
Finally, you can “Deploy” the code to test it. A “CloudWatch logs” button is also included in this section; CloudWatch is an AWS managed service that records all your console.log() output. /!\ Tip: for France, for example, you probably need to switch to a European AWS region; use the small arrow next to the button to go to the correct region.
If you are connected to the same network as your Alexa device, the modification is directly available, without extra configuration or reboot, from your device or from the “Test” tab.
My project, including the NodeJS server for the Raspberry Pi endpoint, the JSON for the Alexa intents, and the NodeJS code for the Alexa Lambda, is available here: