Garmin + ELK Fun

The quality of the reports on Garmin Connect is disappointing, to say the least. Some things I wanted:

  • Daily/weekly aggregates over time. Garmin has fixed presets; at 6 months or a year, you only get monthly counts.
  • Control the date range. Garmin has fixed week, month, 6 month, and year views, with fixed buckets in each.
  • Exclude my commute rides. They skew the averages: what is my cadence/pace on only >20 km bike rides?
  • It would be fun to do.

Unfortunately, Garmin charges(!) for API access. Disappointing. The simplest method I found for getting the data was to manually export it (20 events at a time..) from Garmin Connect. This gave me a bunch of .csv files to work with.

Initially I tried to just use a file input in Logstash with the csv filter. Unfortunately, Garmin is not consistent with its data formats across events. The biggest issue was the pace/speed fields varying: "1:50 min/100 m" for swimming, "15.6" (kph) for cycling/rowing/some swimming, or "4:30" (4:30 min/km) for running/walking.
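
Just to illustrate the problem, normalizing those three formats into something ES can do math on looks roughly like this (a hedged sketch, not the actual parser code):

:::python
# Sketch only: collapse Garmin's three pace/speed formats into one number.
# "1:53 min/100 m" and "4:32" become seconds; "15.6" stays a float (kph).
# The units still differ by activity type; the parser handles that
# per-activity via the config shown below.
def normalize_speed(raw):
    raw = raw.replace(' min/100 m', '')   # "1:53 min/100 m" -> "1:53"
    if ':' in raw:
        minutes, seconds = raw.split(':')
        return int(minutes) * 60 + int(seconds)   # "4:32" -> 272 seconds
    return float(raw)                             # "15.6" -> 15.6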

I ended up writing a python parser to do the following:

  • Read a conf file indicating the format of the csv and the important fields for each activity type
  • Convert fields into usable formats for Kibana graphing
  • Generate JSON logging and send it to Logstash 'ready to go' for ES/Kibana.

The current version of the parser is here. It's rough, but it works. I imported all 377 activities from my Garmin account going back 2.5 years, including some that were initially imported from RunKeeper to Garmin, resulting in some very odd formatting.

Now that I have the initial import working, I'll add a check against ES to prevent sending a duplicate activity during imports. Ideally I'll find a way to automate fetching the 'last 20' events csv daily, but worst case I'll feed the script a fresh csv once a week.
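
A minimal sketch of such a check (the index pattern and field names here are guesses for illustration, not what the script does today):

:::python
import requests

# Ask ES whether an activity with this start time and type already exists.
# Index pattern and field names are assumptions for the sketch.
def already_indexed(start, activity_type):
    query = {"query": {"bool": {"must": [
        {"match": {"Start": start}},
        {"match": {"Activity Type": activity_type}},
    ]}}}
    r = requests.post("http://localhost:9200/garmin-*/_search", json=query)
    return r.json()["hits"]["total"] > 0

Alternatively, setting document_id in the logstash elasticsearch output to something derived from the activity (e.g. its start timestamp) would make re-imports overwrite instead of duplicate.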

Example .csv from garmin connect:

Untitled,Lap Swimming,"Wed, Jun 10, 2015 4:44",18:51,"1,000 m",240,--,--,1:53 min/100 m,1:33 min/100 m,--,--,--,--,480,--,12,--,--,40,
Stockholm Running,Running,"Wed, Jul 15, 2015 6:43",37:20,8.07,615,28,5.0,4:37,3:18,177,194,--,--,--,84,--,--,--,--,
SATs,Treadmill Running,"Tue, Jan 26, 2016 17:41",25:01,5.51,466,0,5.0,4:32,4:20,174,194,--,--,--,81,--,--,--,--,
Stockholm Cycling,Cycling,"Thu, Jul 16, 2015 6:33",3:07,1.15,36,6,--,22.1,25.5,--,--,--,--,--,--,--,--,--,--,
Untitled,Walking,"Sat, Sep 20, 2014 8:19",23:07,2.29,165,28,--,10:35,2:25,--,--,--,--,--,--,--,--,--,--,
Untitled,Rowing,"Tue, Sep 16, 2014 15:23",--,2.60,154,--,--,14.2,--,--,--,--,--,--,--,--,--,--,--,
Untitled,Strength Training,"Tue, Jul 14, 2015 6:42",22:11,0.00,0,0,--,0.0,--,--,--,--,--,--,--,--,--,--,--,
Open Water Swim -Tri,Open Water Swimming,"Sun, Aug 23, 2015 9:00",29:41,1.65,383,--,--,3.3,--,--,--,--,--,--,--,--,--,--,--,

Configuration for the parser:

[config]
csv_order: Activity Name,Activity Type,Start,Time,Distance,Calories,Elevation Gain,Training Effect,Avg Speed(Avg Pace),Max Speed(Best Pace),Avg HR,Max HR,Avg Bike Cadence,Max Bike Cadence,sumStrokes,Avg Run Cadence,Avg Strokes,Min Strokes,Best SWOLF,Avg SWOLF

csv_integers: Calories,Elevation Gain,Avg HR,Max HR,Avg Bike Cadence,Max Bike Cadence,sumStrokes,Best SWOLF,Avg SWOLF
csv_floats: Distance,Training Effect,Avg Speed(Avg Pace),Max Speed(Best Pace)

[activities]
Cycling_fields: Activity Name,Activity Type,Start,Time,Distance,Calories,Elevation Gain,Training Effect,Avg Speed(Avg Pace),Max Speed(Best Pace),Avg HR,Max HR,Avg Bike Cadence,Max Bike Cadence,sumStrokes

Lap Swimming_fields: Activity Name,Activity Type,Start,Time,Distance,Calories,Avg Speed(Avg Pace),Max Speed(Best Pace),Avg HR,Max HR,sumStrokes,Avg Strokes,Min Strokes,Best SWOLF,Avg SWOLF

Open Water Swimming_fields: Activity Name,Activity Type,Start,Time,Distance,Calories,Avg Speed(Avg Pace),Max Speed(Best Pace),Avg HR,Max HR,sumStrokes,Avg Strokes,Min Strokes,Best SWOLF,Avg SWOLF

Strength Training_fields: Activity Name,Activity Type,Start,Time,Calories,Avg HR,Max HR

Running_fields: Activity Name,Activity Type,Start,Time,Distance,Calories,Elevation Gain,Training Effect,Avg Speed(Avg Pace),Max Speed(Best Pace),Avg HR,Max HR,Avg Run Cadence

Key things:

  • csv_order: Since you can order/choose fields in Garmin Connect, I need to specify the order they will appear in.
  • csv_integers/csv_floats: Fields from the csv that should become integers/floats; otherwise you can't do math on them in Kibana/ES.
  • *_fields: The fields for each activity type. The csv puts '--' in every field that has no data or is irrelevant; to prevent extra fields and cast issues in ES, I omit the fields that aren't relevant to each activity (see the sketch below).
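
In sketch form, the per-row logic is roughly the following (simplified; the real thing is in the repo linked above, and the old-logstash JSON layout is my recollection of what the oldlogstashjson codec wants):

:::python
import json
import socket

# Simplified sketch of the per-row logic: zip csv_order onto the row, keep
# only the fields configured for that activity type, drop '--' placeholders,
# and cast ints/floats. Pace/speed strings are normalized first, as sketched
# earlier (quirks like "1,000 m" distances need extra cleanup, glossed over).
def parse_row(row, csv_order, activity_fields, ints, floats):
    record = dict(zip(csv_order, row))
    keep = activity_fields[record['Activity Type']]  # e.g. Cycling_fields
    doc = {}
    for field in keep:
        value = record.get(field, '--')
        if value == '--':           # no data / irrelevant -> omit the field
            continue
        if field in ints:
            value = int(value)
        elif field in floats:
            value = float(value)
        doc[field] = value
    return doc

# Ship to the logstash udp input. The oldlogstashjson codec expects the
# pre-1.2 event layout -- roughly {"@fields": {...}} -- if I remember right.
def send_udp(doc, host='localhost', port=6400):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps({'@fields': doc}), (host, port))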

I will add host/port for the udp logger and some other parameters when I go back to clean up the initial script.

The relevant logstash configuration was:

input {
  udp {
    port => 6400
    codec => "oldlogstashjson"
    type => "garmin"
    workers => 5
  } 
}
output {
  if [type] == "garmin" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "garmin-%{+YYYY.MM.dd}"
      template => "/opt/logstash-custom/templates/garmin.json"
      template_name => "garmin"
    }
  } 
}

The result is what I wanted. I can exclude all of the <15 km bike rides from my overall stats, specify my own ranges/buckets, etc.
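
For example, the "long rides only" stats boil down to a range filter plus an aggregation; the equivalent query against ES directly looks something like this (field names depend on what your parser emitted):

:::python
import requests

# Average cadence over long (>= 20 km) rides only: range filter + avg agg.
query = {
    "size": 0,
    "query": {"bool": {
        "must": [{"match": {"Activity Type": "Cycling"}}],
        "filter": [{"range": {"Distance": {"gte": 20}}}],
    }},
    "aggs": {"avg_cadence": {"avg": {"field": "Avg Bike Cadence"}}},
}
r = requests.post("http://localhost:9200/garmin-*/_search", json=query)
print r.json()["aggregations"]["avg_cadence"]["value"]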

I also created views for each activity: cycling, running, and swimming.

more ...

Client/Server Python Scripts

I started this site with the intent to do a weekly post; however, I've found myself in Sweden for the past 8+ weeks for work. (There are worse places to spend your summer ;) ) Sorry for the lack of updates!

There are 1000 different ways to triage network issues; here is one tool: a simple Python server listening on a particular port and printing out the details of what the client sent, plus a client to send said data.

Server:

[jmorgan@arch-dopey ~]$ cat server.py
#!/usr/bin/python2

import socket
from datetime import datetime

s = socket.socket()
host = socket.gethostname()
port = 1337
s.bind((host, port))

s.listen(5)
while True:
    # Accept a connection and log what was received plus when we received it
    c, addr = s.accept()
    sockS = c.recv(3000).strip('\n')
    if sockS:
        logF = open('rLog', 'a')
        dt = str(datetime.now())
        wee = "%s %s \n" % (sockS, dt)
        logF.write(wee)
        logF.close()
        sockS = None
    c.close()

Client:

[jmorgan@arch-dopey ~]$ cat tcpSend.py
#!/usr/bin/python2
import sys
import socket
from datetime import datetime

ip = sys.argv[1]
port = int(sys.argv[2])
mCount = int(sys.argv[3])
print "%s:%s %s packets" % (ip, port, mCount)
count = 0
while count <= mCount:
    logF = open('sLog', 'a')
    dt = str(datetime.now())
    msg = "%s %s" % (str(count), dt)
    try:
        # Open a TCP connection and send the counter + send timestamp
        sock = socket.socket(socket.AF_INET,  # Internet
                             socket.SOCK_STREAM)
        sock.connect((ip, port))
        sock.send(msg)
        logF.write("Success: %s\n" % (msg))
        sock.close()
    except socket.error:
        # Connection/send failed; log when we started and when we gave up
        dtE = str(datetime.now())
        logF.write("Fail:%s Start: %s End: %s\n" % (str(count), dt, dtE))
    count += 1
    logF.close()
[jmorgan@arch-dopey ~]$

Usage:

#One terminal  
[jmorgan@arch-dopey ~]$ sudo ./server.py  
#Another terminal  
[jmorgan@arch-dopey ~]$ ./tcpSend.py localhost 1337 3  
localhost:1337 3 packets  
[jmorgan@arch-dopey ~]$ cat sLog  
Success: 0 2013-07-22 21:17:12.124353  
Success: 1 2013-07-22 21:17:12.124862  
Success: 2 2013-07-22 21:17:12.125124  
Success: 3 2013-07-22 21:17:12.125321  
[jmorgan@arch-dopey ~]$ cat rLog  
0 2013-07-22 21:17:12.124353 2013-07-22 21:17:12.124854  
1 2013-07-22 21:17:12.124862 2013-07-22 21:17:12.125148  
2 2013-07-22 21:17:12.125124 2013-07-22 21:17:12.125335  
3 2013-07-22 21:17:12.125321 2013-07-22 21:17:12.125485  
[jmorgan@arch-dopey ~]$

The client sends a timestamp when it sends each packet, and the server logs that along with when it received the packet. It works well to compare latency, dropped packets, etc. over a network. Nothing fancy, just a quick and dirty script written under fire to triage an issue. In my situation, the VIP would quit responding at times, so the Fail lines let me know how often that happened over, say, 10,000 or 100,000 packets, as well as the amount of time it took to send that number of packets between zones.

more ...

onConnect Updated to Handle an Additional Trigger

I expanded onConnect; you can check out the current version here. I plan to add more triggers, but for now I've added a File trigger.

The configuration file now supports a 'File:$name' section, in which you define a file to monitor, the expected contents of that file, and whether you want to run a command when the file does or does not match the expected value. It only acts when the value changes, though.

But why, you might ask? Well, after reading the Bumblebee page on the Arch Wiki, I wanted to toggle between Nvidia and Intel graphics on my XPS 15 whenever I moved to/from battery or A/C power. I was initially going to hardcode onConnect to handle this, but that seemed limiting.

Instead I have a framework for other triggers, and simply had to point onConnect at the /sys/class/power_supply/ACAD/online file, which changes depending on my power source.
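
I won't paste the real config here, but the shape of it is roughly this (the key names are inferred from the log output below, so treat them as illustrative):

:::ini
[File:onACPower]
file: /sys/class/power_supply/ACAD/online
expected: 1
# when the file stops matching (unplugged), turn the nvidia card off
actionFalse Cmd: echo OFF > /proc/acpi/bbswitch

[File:onBatteryPower]
file: /sys/class/power_supply/ACAD/online
expected: 0
# when the file starts matching (on battery), turn the nvidia card off
actionTrue Cmd: echo OFF > /proc/acpi/bbswitch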

Here it is changing from AC power to battery; you can also see the increase in estimated battery time using acpi.

#Unplugging the power cable
DEBUG:root:State changed for File:onACPower
DEBUG:root:New state matches configuration File:onACPower, running actionFalse Cmd echo OFF > /proc/acpi/bbswitch
DEBUG:root:Found File:onBatteryPower
DEBUG:root:Loading Configuration for File:onBatteryPower
DEBUG:root:Looking up prior state for File:onBatteryPower
DEBUG:root:Prior state for File:onBatteryPower is 1
DEBUG:root:State changed for File:onBatteryPower
DEBUG:root:New state matches configuration File:onBatteryPower, running actionTrue Cmd echo OFF > /proc/acpi/bbswitch

#watching acpi/bbswitch status in another window, before/after the configuration change above
#plugged in to AC power with bbswitch 'ON'
[jpyth@arch-jpyth jpyth]# cat /proc/acpi/bbswitch ; acpi
0000:01:00.0 ON
Battery 0: Unknown, 99%
#unplugged, before the next run occurred (13 second window)
[jpyth@arch-jpyth jpyth]# cat /proc/acpi/bbswitch ; acpi
0000:01:00.0 ON
Battery 0: Discharging, 96%, 02:19:09 remaining
#after the above run completes: bbswitch is 'OFF', and battery time increased
[jpyth@arch-jpyth jpyth]# cat /proc/acpi/bbswitch ; acpi
0000:01:00.0 OFF
Battery 0: Discharging, 96%, 03:37:31 remaining
#behaves exactly as expected when plugging power back in. The above is a set of redundant configs, only to show the True/False cases possible with the configuration; I'd normally only have one.

I can now tie in the g13 daemon quite easily: monitor its log file for errors to stop the daemon upon disconnect, and start the daemon automatically when the device is connected.

more ...

onConnect: Monitor/Adjust System Configuration Based on Network

I initially just wanted to adjust my volume depending on whether I was at work or at home on my laptop. I decided to try to write something I could plug additional things into over time, to automate any task I normally do when moving between known networks.

This is also the first time I've shared something I wrote on GitHub. I'm sure I broke every rule in the book on best practices, as I am self-taught and not a developer. Feel free to check it out though: OnConnect GitHub. I welcome any feedback. :)

I'm not sure I like using the MAC for this; I debated using the ESSID and may very well go back to that. I don't like the case where more than one MAC might match a config (work), but I also wanted to read/parse from a file rather than parse a command's stdout, and I haven't found a reliable method for obtaining the ESSID from a file yet.
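
For reference, the sort of file-only lookup I mean is possible via /proc. A sketch of the general idea (not necessarily how onConnect does it):

:::python
import socket
import struct

# Identify the network by the default gateway's MAC using only files under
# /proc -- no command+stdout parsing. Sketch of the idea, not onConnect code.
def gateway_ip():
    with open('/proc/net/route') as f:
        for line in f.readlines()[1:]:
            fields = line.split()
            if fields[1] == '00000000':          # default route entry
                # gateway is little-endian hex -> dotted quad
                return socket.inet_ntoa(struct.pack('<L', int(fields[2], 16)))

def gateway_mac():
    gw = gateway_ip()
    with open('/proc/net/arp') as f:
        for line in f.readlines()[1:]:
            fields = line.split()
            if fields[0] == gw:
                return fields[3].upper()         # HW address column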

Some examples of it working, for me at least :)
:::bash
#no network, no macs found, default config
[jpyth@arch-jpyth ~]$ ip addr | grep -A1 wlp8s0
3: wlp8s0: mtu 1500 qdisc mq state DOWN qlen 1000
link/ether c4:85:08:fc:91:85 brd ff:ff:ff:ff:ff:ff
[jpyth@arch-jpyth ~]$
[jpyth@arch-jpyth ~]$ sudo systemctl status onConnect
onConnect.service - onConnect
Loaded: loaded (/usr/lib/systemd/system/onConnect.service; enabled)
Active: active (running) since Fri 2013-03-29 10:29:42 PDT; 2s ago
Process: 7886 ExecStop=/usr/lib/systemd/scripts/onConnectStop (code=killed, signal=TERM)
Process: 7898 ExecStart=/usr/lib/systemd/scripts/onConnectStart (code=exited, status=0/SUCCESS)
Main PID: 7905 (onConnect)
CGroup: name=systemd:/system/onConnect.service
└─7905 /usr/bin/python2 /usr/bin/onConnect

Mar 29 10:29:42 arch-jpyth systemd[1]: Starting onConnect...  
Mar 29 10:29:42 arch-jpyth systemd[1]: Started onConnect.  
[jpyth@arch-jpyth ~]$ tail /var/log/onConnect/onConnect.log  
INFO:root:Config found for Default  
DEBUG:root:<function doVolume at 0x7fd4c1dd30c8>
INFO:root:command amixer cset iface=MIXER,name='Master Playback Volume' 60% for Default
DEBUG:root:<function doVpn at 0x7fd4c1dd3140>
INFO:root:command systemctl stop vpn for Default  
[jpyth@arch-jpyth ~]$  
#vpn being stopped:  
[jpyth@arch-jpyth ~]$ sudo systemctl status vpn  
vpn.service - OpenVpn For Work  
Loaded: loaded (/usr/lib/systemd/system/vpn.service; disabled)  
Active: inactive (dead)

Mar 29 10:29:42 arch-jpyth systemd[1]: Stopping OpenVpn For Work...  
Mar 29 10:29:43 arch-jpyth systemd[1]: Stopped OpenVpn For Work.  
[jpyth@arch-jpyth ~]$

#Now starting/joining wireless network  
[jpyth@arch-jpyth ~]$ sudo systemctl start wireless  
[jpyth@arch-jpyth ~]$ systemctl status wireless  
wireless.service - Wireless  
Loaded: loaded (/usr/lib/systemd/system/wireless.service; enabled)  
Active: active (running) since Fri 2013-03-29 10:34:37 PDT; 5s ago  
Process: 7692 ExecStop=/usr/lib/systemd/scripts/wirelessStop (code=exited, status=0/SUCCESS)
Process: 7942 ExecStart=/usr/lib/systemd/scripts/wirelessStart (code=exited, status=0/SUCCESS)
Main PID: 7948 (wpa_supplicant)
CGroup: name=systemd:/system/wireless.service
└─7948 /usr/sbin/wpa_supplicant -f /var/log/wpa_supplicant/wpa_supplicant.log -B -i wlp8s0 -P /var/run/wireless.pid -c /etc/wpa_supplicant.conf -d
[jpyth@arch-jpyth ~]$

#onConnect log file  
INFO:root:['FF:FF:FF:FF:FF:FF', 'Default']  
INFO:root:Config found for FF:FF:FF:FF:FF:FF  
['description', 'location', 'volume', 'vpn']  
DEBUG:root:<function doVolume at 0x7fd4c1dd30c8>
INFO:root:command amixer cset iface=MIXER,name='Master Playback Volume' 85% for FF:FF:FF:FF:FF:FF
DEBUG:root:<function doVpn at 0x7fd4c1dd3140>
INFO:root:command systemctl start vpn for FF:FF:FF:FF:FF:FF

#vpn started automatically when joined:  
[jpyth@arch-jpyth ~]$ sudo systemctl status vpn  
vpn.service - OpenVpn For Work  
Loaded: loaded (/usr/lib/systemd/system/vpn.service; disabled)  
Active: active (running) since Fri 2013-03-29 10:34:57 PDT; 1min 22s ago
Process: 7978 ExecStart=/usr/lib/systemd/scripts/vpnStart (code=exited, status=0/SUCCESS)
Main PID: 7986 (screen)  
CGroup: name=systemd:/system/vpn.service

Mar 29 10:34:57 arch-jpyth systemd[1]: Starting OpenVpn For Work...  
Mar 29 10:34:57 arch-jpyth systemd[1]: Started OpenVpn For Work.  
[jpyth@arch-jpyth ~]$

I think it'll do what I need =D

more ...