Good to know: Unofficial Rivian Prometheus Exporter
Prometheus supports custom collectors for generating metrics to include in observability platforms; you can write exporters in any of the supported client libraries.
I have written the Rivian Prometheus Exporter using the Python client. You can deploy it using the Kubernetes resources below as well.
The exporter exposes a single gauge (battery %) and a couple of counter metrics.
Steps
Code up the Rivian exporter in Python for Prometheus
Dockerize the exporter and build the container
Deploy on Kubernetes
Scrape the whip metrics
Inspect the data outcome
Exporter
/src/rivian_exporter.py
import time
from prometheus_client.core import GaugeMetricFamily, REGISTRY, CounterMetricFamily
from prometheus_client import start_http_server
import rivian_api as rivian
import os
import json
import random


class RivianExporter(object):
    def __init__(self):
        self.rivian = rivian.Rivian()
        response = self.rivian.login(
            os.environ['RIVIAN_USERNAME'],
            os.environ['RIVIAN_PASSWORD']
        )
        # owner info, grab rivian vehicleid
        owner = self.rivian.get_user_information()
        self.rivianid = owner['data']['currentUser']['vehicles'][0]['id']
        print(f'Rivian: {self.rivianid}')

    def collect(self):
        # status info
        whipstatus = self.rivian.get_vehicle_state(self.rivianid)
        # battery level - batteryLevel
        # distance to empty - distanceToEmpty
        # gear status - gearStatus
        batterylevel = whipstatus['data']['vehicleState']['batteryLevel']['value']
        distancetoempty = whipstatus['data']['vehicleState']['distanceToEmpty']['value']
        gearstatus = whipstatus['data']['vehicleState']['gearStatus']['value']
        # Metric Translations
        if gearstatus == 'park':
            gearstatus = 0
        else:
            gearstatus = 1
        a = GaugeMetricFamily("rivian_battery_level", "% of Battery left", labels=['whip'])
        a.add_metric([self.rivianid], batterylevel)
        yield a
        b = CounterMetricFamily("rivian_battery_distance_empty", 'Miles Left', labels=['whip'])
        b.add_metric([self.rivianid], distancetoempty)
        yield b
        c = CounterMetricFamily("rivian_gear_status", '0=park, otherwise rolling...', labels=['whip'])
        c.add_metric([self.rivianid], gearstatus)
        yield c


if __name__ == '__main__':
    start_http_server(8000)
    REGISTRY.register(RivianExporter())
    while True:
        REGISTRY.collect()
        # let's not piss off the Site Reliability Teams at Rivian
        time.sleep(90)
I am only exposing these gauge and counter metrics, but you can easily modify the collector with your own logic from another system.
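For instance, here is a minimal sketch of adding one more gauge inside collect(); the cabinClimateInteriorTemperature field is my assumption about the vehicleState payload, so swap in whatever value your source actually returns:
# sketch: drop this into collect() alongside the other metrics
# NOTE: the vehicleState field below is an assumption, use whatever your payload has
d = GaugeMetricFamily("rivian_cabin_temperature", "Cabin temperature", labels=['whip'])
d.add_metric([self.rivianid], whipstatus['data']['vehicleState']['cabinClimateInteriorTemperature']['value'])
yield d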
In the second step we will build our Docker container.
Container
The Dockerfile below packages the exporter with its Python dependencies.
FROM python:3.8
ADD src /src
RUN pip install prometheus_client plotly polyline python-dateutil python-dotenv requests geopy
WORKDIR /src
ENV PYTHONPATH '/src/'
ENV RIVIAN_PASSWORD 'secret'
ENV RIVIAN_USERNAME 'k8s'
CMD ["python", "/src/rivian_exporter.py"]
Create a namespace and add your Rivian Credentials as a secret:
kubectl create ns rivian
kubectl create secret generic rivian-user-pass -n rivian \
--from-literal=rivian_username='ron.sweeney+api@hotmale.com' \
--from-literal=rivian_password='12345' # same as your luggage
Apply the Deployment and LoadBalancer (or NodePort):
kubectl apply -f deploy/* -n rivian
If everything worked out, we should see a pair of Fonzies running on our cluster: the Deployment and the MetalLB LoadBalancer Service.
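For reference, the manifests in deploy/ could look roughly like the sketch below; the image name is an assumption (point it at wherever you pushed the container), and the Service maps port 5000 on the LoadBalancer to the exporter's 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rivian-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rivian-exporter
  template:
    metadata:
      labels:
        app: rivian-exporter
    spec:
      containers:
        - name: rivian-exporter
          image: yourregistry/rivian-exporter:latest  # assumption: wherever you pushed the image
          ports:
            - containerPort: 8000
          env:
            - name: RIVIAN_USERNAME
              valueFrom:
                secretKeyRef:
                  name: rivian-user-pass
                  key: rivian_username
            - name: RIVIAN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rivian-user-pass
                  key: rivian_password
---
apiVersion: v1
kind: Service
metadata:
  name: rivian-exporter
spec:
  type: LoadBalancer
  selector:
    app: rivian-exporter
  ports:
    - port: 5000
      targetPort: 8000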
Data Inspection
Now hit the MetalLB Load balancer on port 5000 and bask in the glory of the exported metrics.
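For example, from a shell (using the LoadBalancer address on my LAN; yours will differ), you should see the rivian_battery_level, rivian_battery_distance_empty and rivian_gear_status families:
curl -s http://192.168.1.92:5000/ | grep rivian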
Awesome as that is, most people aren't impressed by raw metrics endpoints, so get ready to be even less impressed as we look at a round trip of errands using Prometheus to explore the data.
You need to define a simple prometheus.yml:
# Sample config for Prometheus.
global:
  scrape_interval: 30s     # Scrape targets every 30 seconds by default.
  evaluation_interval: 30s # Evaluate rules every 30 seconds by default.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'deezwatts'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's the Rivian exporter.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'deezwatts'
    # Override the global default and scrape targets from this job every 30 seconds.
    scrape_interval: 30s
    scrape_timeout: 30s
    static_configs:
      - targets: ['192.168.1.92:5000']
    metrics_path: /
Then run it and hit http://localhost:9090 to explore the data time series in Prometheus.
docker run \
-p 9090:9090 \
-v $PWD/prom/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
Let's inspect the data from running a couple of errands in the whip.
So it's about 3 PM EST, and I needed to go to the store; along the way I stopped at Taco Bell, then drove home to Gun Lake. The total trip was about 20 miles, with 2 stops.
Gun Lake -> Taco Bell -> Grocery Store -> Gun Lake
Distance to Empty
I hope you appreciate the simplicity here, but Prometheus told the story of my errand run... I started out with a full charge on the Extended setting, drove 10 miles, made two stops very close to each other, then drove the 10 miles back. You can even see where I did some driveway shuffling before I plugged it in to charge to Standard.
Gear Status
This one won't win any visualization awards, but recall the metric we defined: 0 = Park, and anything else maps to 1 = in motion (whether backwards or forwards). You can clearly see the 3 errands and the park shuffling in the gear status as well.
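If you want to recreate these views yourself, the queries in the Prometheus expression browser are just the metric names; note that, depending on your prometheus_client version, the counters may be exposed with a _total suffix:
# Battery percentage over time
rivian_battery_level
# Miles of range remaining (the "Distance to Empty" graph)
rivian_battery_distance_empty_total
# 0 = park, 1 = rolling (the "Gear Status" graph)
rivian_gear_status_total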