<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://github.crookster.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://github.crookster.org/" rel="alternate" type="text/html" /><updated>2026-02-15T00:10:22+00:00</updated><id>https://github.crookster.org/feed.xml</id><title type="html">David Crook</title><subtitle>NoCo Hacker Dev Blog</subtitle><entry><title type="html">Very Old Tenvis TH611 Webcam Working With Frigate - Even PTZ</title><link href="https://github.crookster.org/very-old-tenvis-th611-webcam-working-with-frigate-even-ptc/" rel="alternate" type="text/html" title="Very Old Tenvis TH611 Webcam Working With Frigate - Even PTZ" /><published>2026-02-14T00:00:00+00:00</published><updated>2026-02-14T00:00:00+00:00</updated><id>https://github.crookster.org/very-old-tenvis-th611-webcam-working-with-frigate---even-ptc</id><content type="html" xml:base="https://github.crookster.org/very-old-tenvis-th611-webcam-working-with-frigate-even-ptc/"><![CDATA[<p>I had a two-pack of Tenvis webcams that included wall mounts and hardware. The webcam models are TH611 and are “PTZ”. They were originally purchased in 2019. It was a slog, but I finally got them configured for Frigate.</p>

<h2 id="tenvis-th611-models">Tenvis TH611 models</h2>

<p>The year is 2026, and Tenvis apparently still offers a “TH611” model. It is version 4 of the hardware according to their naming, so it’s an entirely different platform than what I have.</p>

<p>The newer model versions have something called a <em>“World Wide Web”</em> interface where a configuration page can be accessed in something called a <em>“Web Browser”</em>, but the models I have do not offer this technology. Still, people have reported older models like mine working in Frigate or in Home Assistant, but the evidence was piecemeal.</p>

<h2 id="unboxing">“Unboxing”</h2>

<p>I had put them back in their original packaging when I last took them down. They were no longer connected to the Tenvis cloud (although at one time I had tried this to get remote access to the camera stream that it offered). Now they (still) have a password-protected local <code class="language-plaintext highlighter-rouge">admin</code> account.</p>

<p>More on that later. Anyway, these webcams have Ethernet (though not PoE) and Wi-Fi networking, and a 5V power supply with a barrel (non-USB) connector. I plugged one into wired Ethernet, connected power, and powered it on.</p>

<p><img src="/images/tenvisty-ipad-macos-view.png" alt="Tenvisty App" title="Main screen in newly-installed Tenvisty app, captured in macOS of iPad app version." /></p>

<p>In their iOS app named “Tenvisty”, it is possible to configure new or existing cameras. On my iPhone I didn’t have it installed anymore, so I re-downloaded it. Crazily, when I launched the <strong>Tenvisty</strong> app, the two cameras were right there all ready to go, as if no time had passed. “Check New Firmware” revealed I am already using the latest version (<code class="language-plaintext highlighter-rouge">1.3.15.29</code>). Haha.</p>

<p>The Pan-Tilt-“Zoom” (spoiler alert: these models do not have a zoom function) works in the iOS app. They are marketed as 720p (1280x720) cameras.</p>

<h3 id="googling-for-home-assistant-support-and-for-frigate-support">Googling for Home Assistant support and for Frigate support</h3>

<p>On the Home Assistant forums there is a post from December 2019: <a href="https://community.home-assistant.io/t/tenvis-th661-ip-camera-anybody-got-this-working/157063/13">Tenvis TH661 IP Camera, Anybody got this working?</a>. OK, promising.</p>

<p>Someone has even documented the PTZ API commands that they are using with theirs. But there is talk of a web page interface and much newer firmware versions, so it is unlikely that many of the responses apply. MY CAMERAS DO NOT HAVE THE HTTP PORT :80 OPEN ON THE DEVICES. However, they do have port <code class="language-plaintext highlighter-rouge">:554</code> (<code class="language-plaintext highlighter-rouge">rtsp</code>) open, which means that they probably can stream RTSP. And indeed, they do. Just need the <code class="language-plaintext highlighter-rouge">admin</code> account password and the proper URL, and then they should work, right?</p>

<p>For Frigate, there are two main protocols for interacting with webcams: <code class="language-plaintext highlighter-rouge">RTSP</code> for streaming video, and <code class="language-plaintext highlighter-rouge">ONVIF</code>, which is used to reveal capabilities and control them (think: the panning function).</p>

<p>To start with, let’s try to stream a video in Frigate. There are two profiles:</p>
<ul>
  <li>1280x720: <code class="language-plaintext highlighter-rouge">rtsp</code> url path <code class="language-plaintext highlighter-rouge">/11</code></li>
  <li>640x360: <code class="language-plaintext highlighter-rouge">rtsp</code> url path <code class="language-plaintext highlighter-rouge">/12</code></li>
</ul>
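
<p>Before wiring anything into Frigate, the stream URLs can be sanity-checked from any machine with <code class="language-plaintext highlighter-rouge">ffmpeg</code> installed. A hedged example, assuming the default <code class="language-plaintext highlighter-rouge">admin:admin</code> credentials and one of my camera IPs:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># should report an H264 1280x720 video stream if the URL and credentials are right
ffprobe -rtsp_transport tcp "rtsp://admin:admin@192.168.10.75:554/11"
</code></pre></div></div>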

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">go2rtc</span><span class="pi">:</span>
  <span class="na">streams</span><span class="pi">:</span>
    <span class="na">tenvis_great_room</span><span class="pi">:</span> <span class="s2">"</span><span class="s">rtsp://admin:admin@192.168.10.75:554/11"</span>
    <span class="na">tenvis_great_room_sub</span><span class="pi">:</span> <span class="s2">"</span><span class="s">rtsp://admin:admin@192.168.10.75:554/12"</span>
    <span class="s">...</span>

<span class="na">cameras</span><span class="pi">:</span>

  <span class="na">tenvis_great_room</span><span class="pi">:</span>
    <span class="na">enabled</span><span class="pi">:</span> <span class="no">true</span>
    <span class="na">ffmpeg</span><span class="pi">:</span>
      <span class="na">inputs</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">path</span><span class="pi">:</span> <span class="s">rtsp://127.0.0.1:8554/tenvis_great_room_sub</span>
          <span class="na">input_args</span><span class="pi">:</span> <span class="s">preset-rtsp-restream</span>
          <span class="na">roles</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s">detect</span>
        <span class="pi">-</span> <span class="na">path</span><span class="pi">:</span> <span class="s">rtsp://127.0.0.1:8554/tenvis_great_room</span>
          <span class="na">input_args</span><span class="pi">:</span> <span class="s">preset-rtsp-restream</span>
          <span class="na">roles</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s">record</span>
    <span class="na">detect</span><span class="pi">:</span>
      <span class="na">enabled</span><span class="pi">:</span> <span class="no">false</span>

</code></pre></div></div>

<p>That works great. It took me QUITE a while of fiddling and googling (and I had to find my <code class="language-plaintext highlighter-rouge">admin</code> password in a copy of <code class="language-plaintext highlighter-rouge">1Password 7</code> that is in read-only mode since I moved away from them some time ago), but many sources had those two RTSP URL paths correct.</p>

<p>But what about ONVIF support? Without it (or something like it), the PTZ capabilities will be unavailable in Frigate and Home Assistant. ONVIF is typically served on port :80 or port :443 in the vast majority of webcam implementations out there. Which makes sense, since ONVIF is a SOAP web service. But I didn’t know that or anything resembling that when I was starting on this. And most posts resulting from search engine queries were <strong>just complaints that they couldn’t get PTZ working from third-party software</strong> with this camera model.</p>

<p>But the app certainly is able to control the webcams. Are they using a proprietary protocol? Again, web searches didn’t leave any cookie crumbs out there for me to follow in this regard. So I tried a port-scan of a camera to see if any ports other than port <code class="language-plaintext highlighter-rouge">:554</code> were open (e.g. Tapo webcams serve ONVIF on port <code class="language-plaintext highlighter-rouge">:2020</code>). First I tried <code class="language-plaintext highlighter-rouge">nmap</code> to do an exhaustive scan. While I was waiting for that to do its supposed magic, I also tried <code class="language-plaintext highlighter-rouge">netcat</code> (the <code class="language-plaintext highlighter-rouge">nmap</code> scan never returned any ports, and I eventually quit it).</p>
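
<p>(In hindsight, scoping <code class="language-plaintext highlighter-rouge">nmap</code> to a port range and skipping host discovery would likely have come back quickly. A hedged sketch of what I could have run instead of the exhaustive scan:)</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># TCP scan of the first ~10k ports only, without a ping probe first
nmap -Pn -p 1-10200 192.168.10.74
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">netcat</code> sweep, meanwhile, came back almost immediately:</p>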

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">&gt;</span> nc <span class="nt">-zv</span> 192.168.10.74 1-1000 |&amp; <span class="nb">grep</span> <span class="nt">-v</span> <span class="s2">"Connection refused"</span>
Connection to 192.168.10.74 port 554 <span class="o">[</span>tcp/rtsp] succeeded!

<span class="o">&gt;</span> nc <span class="nt">-zv</span> 192.168.10.74 1001-10200 |&amp; <span class="nb">grep</span> <span class="nt">-v</span> <span class="s2">"Connection refused"</span>
Connection to 192.168.10.74 port 8999 <span class="o">[</span>tcp/bctp] succeeded!
</code></pre></div></div>

<p>You can have <code class="language-plaintext highlighter-rouge">netcat</code> try to connect to a host on a range of ports, and it will report its attempts. And wouldn’t you know it, I finally had a clue to follow. It listens on port <code class="language-plaintext highlighter-rouge">:554</code> (but we already knew that!). It also listens on port <code class="language-plaintext highlighter-rouge">:8999</code>, and that port being open could mean that a web page or something is there… Let’s try some basic connections.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl -v http://admin:admin@192.168.10.74:8999/
*   Trying 192.168.10.74:8999...
* Established connection to 192.168.10.74 (192.168.10.74 port 8999) from 192.168.10.8 port 63020
* using HTTP/1.x
* Server auth using Basic with user 'admin'
&gt; GET / HTTP/1.1
&gt; Host: 192.168.10.74:8999
&gt; Authorization: Basic YWRtaW46YWRtaW4=
&gt; User-Agent: curl/8.18.0
&gt; Accept: */*
&gt;
* Request completely sent off
* Empty reply from server
* shutting down connection #0
curl: (52) Empty reply from server

curl -v http://admin:admin@192.168.10.74:8999/onvif/device_service
*   Trying 192.168.10.74:8999...
* Established connection to 192.168.10.74 (192.168.10.74 port 8999) from 192.168.10.8 port 62994
* using HTTP/1.x
* Server auth using Basic with user 'admin'
&gt; GET /onvif/device_service HTTP/1.1
&gt; Host: 192.168.10.74:8999
&gt; Authorization: Basic YWRtaW46YWRtaW4=
&gt; User-Agent: curl/8.18.0
&gt; Accept: */*
&gt;
* Request completely sent off
* Empty reply from server
* shutting down connection #0
curl: (52) Empty reply from server
</code></pre></div></div>

<p>Bummer. Does not seem to be a normal web server. But I don’t know enough about how ONVIF is supposed to work to rule it out. So now I am armed with the tidbit of knowledge that port <code class="language-plaintext highlighter-rouge">:8999</code> is doing something on these devices, and eventually the Google and GitHub databases came through for me.</p>
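
<p>(What I know now: ONVIF endpoints answer SOAP messages sent via HTTP POST, not plain GETs, so a probe more like the following might well have gotten a reply. A hedged example using the standard <code class="language-plaintext highlighter-rouge">GetSystemDateAndTime</code> request, which typically does not require authentication:)</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># ONVIF is SOAP-over-HTTP: POST an envelope to the device service endpoint
curl -s -X POST http://192.168.10.74:8999/onvif/device_service \
  -H 'Content-Type: application/soap+xml; charset=utf-8' \
  -d '&lt;s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"&gt;&lt;s:Body&gt;&lt;tds:GetSystemDateAndTime xmlns:tds="http://www.onvif.org/ver10/device/wsdl"/&gt;&lt;/s:Body&gt;&lt;/s:Envelope&gt;'
</code></pre></div></div>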

<p>In a ZoneMinder (another NVR software package) GitHub bug report, <a href="https://github.com/ZoneMinder/zoneminder/issues/1346">ONVIF probing camera not detected </a>, TH661 and 8999 were mentioned together. It was a bug report dealing with a Perl script, <code class="language-plaintext highlighter-rouge">zmonvif-probe.pl</code>, that is maintained within ZoneMinder. This script is used to probe networked devices and reveal information about them. Cool! So I put together that maybe I could run this script targeting my TH611.</p>

<p>There is a docker image – well, I was working from an Apple Silicon Mac, so I needed to source an arm64 docker image – that can be used to run a full ZoneMinder install, with that interesting <code class="language-plaintext highlighter-rouge">zmonvif-probe.pl</code> script included.</p>

<p>I have been using the OrbStack container app as a replacement for the official Docker Desktop on my macOS host for a while. It turns out, OrbStack CAN USE <code class="language-plaintext highlighter-rouge">--net=host</code> mode on macOS! This means you can skip convoluted networking-stack workarounds and run container services as if they’re listening on the native host ports. Anyway, here’s what I got to work to run that diagnostic script on macOS:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># this includes an ARM64 build that the official ZoneMinder does not</span>
docker run <span class="nt">--net</span><span class="o">=</span>host klutchell/zoneminder

<span class="c"># now, in another terminal session</span>
docker ps
CONTAINER ID   IMAGE
105014d76de7   ...

<span class="c"># using the perl script to try to read profiles as if it is running ONVIF there</span>
docker <span class="nb">exec</span> <span class="nt">-it</span> 105014d76de7  /usr/bin/zmonvif-probe.pl <span class="nt">-v</span> profiles http://192.168.10.74:8999/onvif/device_service 1.1 admin admin
Received message:
&lt;GetProfilesResponse <span class="nv">xmlns</span><span class="o">=</span><span class="s2">"http://www.onvif.org/ver10/media/wsdl"</span><span class="o">&gt;</span>&lt;Profiles <span class="nv">token</span><span class="o">=</span><span class="s2">"PROFILE_000"</span> <span class="nv">fixed</span><span class="o">=</span><span class="s2">"true"</span> ...
PROFILE_000, PROFILE_000, H264, 1280, 720, 15, rtsp://192.168.10.74:554/o0_3781
PROFILE_001, PROFILE_001, H264, 640, 360, 15, rtsp://192.168.10.74:554/o1_3782

<span class="c"># this one errors, but the camera returns a whole SOAP message describing the PTZ capabilities</span>
docker <span class="nb">exec</span> <span class="nt">-it</span> 105014d76de7  /usr/bin/zmonvif-probe.pl <span class="nt">-v</span> move http://192.168.10.74:8999/onvif/device_service 1.2 admin  admin
<span class="o">&gt;&gt;&gt;</span> error message here ignored <span class="o">&lt;&lt;&lt;</span>
&lt;?xml <span class="nv">version</span><span class="o">=</span><span class="s2">"1.0"</span> <span class="nv">encoding</span><span class="o">=</span><span class="s2">"UTF-8"</span>?&gt;
&lt;soap:Envelope xmlns:soap<span class="o">=</span><span class="s2">"http://www.w3.org/2003/05/soap-envelope"</span>
...
&lt;soap:Body&gt;
    &lt;tptz:GetNodesResponse&gt;
...
   &lt;/tptz:GetNodesResponse&gt;
&lt;/soap:Body&gt; &lt;/soap:Envelope&gt;
</code></pre></div></div>

<p>Holy Cow! So it <strong>IS</strong> running an ONVIF interface on port <code class="language-plaintext highlighter-rouge">:8999</code>.</p>

<h2 id="add-onvif-section-to-frigate-config">Add ONVIF section to Frigate config</h2>

<p>In our Frigate <code class="language-plaintext highlighter-rouge">config.yaml</code>, we can now put an ONVIF section that lets Frigate know it can talk to this camera using the ONVIF protocol. This is the same config as above, with an <code class="language-plaintext highlighter-rouge">onvif:</code> section added.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">cameras</span><span class="pi">:</span>

  <span class="na">tenvis_great_room</span><span class="pi">:</span>
    <span class="na">enabled</span><span class="pi">:</span> <span class="no">true</span>
    <span class="na">ffmpeg</span><span class="pi">:</span>
      <span class="na">inputs</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">path</span><span class="pi">:</span> <span class="s">rtsp://127.0.0.1:8554/tenvis_great_room_sub</span>
          <span class="na">input_args</span><span class="pi">:</span> <span class="s">preset-rtsp-restream</span>
          <span class="na">roles</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s">detect</span>
        <span class="pi">-</span> <span class="na">path</span><span class="pi">:</span> <span class="s">rtsp://127.0.0.1:8554/tenvis_great_room</span>
          <span class="na">input_args</span><span class="pi">:</span> <span class="s">preset-rtsp-restream</span>
          <span class="na">roles</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s">record</span>
    <span class="na">detect</span><span class="pi">:</span>
      <span class="na">enabled</span><span class="pi">:</span> <span class="no">false</span>

    <span class="na">onvif</span><span class="pi">:</span>
      <span class="na">host</span><span class="pi">:</span> <span class="s">192.168.10.75</span>
      <span class="na">port</span><span class="pi">:</span> <span class="m">8999</span>  <span class="c1"># tenvis quirk</span>
      <span class="na">user</span><span class="pi">:</span> <span class="s">admin</span>
      <span class="na">password</span><span class="pi">:</span> <span class="s2">"</span><span class="s">admin"</span>

</code></pre></div></div>

<p>And in the Frigate view for this camera, you can see the direction arrow buttons overlaid on the image. And they function!</p>

<p><img src="/images/frigate-tenvis-th611-ptc.png" alt="Screen grab of Frigate with PTZ camera support" /></p>]]></content><author><name></name></author><summary type="html"><![CDATA[I had a two-pack of Tenvis webcams that included wall mounts and hardware. The webcam models are TH611 and are “PTZ”. They were originally purchased in 2019. It was a slog, but I finally got them configured for Frigate.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/frigate-tenvis-th611-ptc.png" /><media:content medium="image" url="https://github.crookster.org/images/frigate-tenvis-th611-ptc.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Fancy Home Assistant Weather Cards and Dashboard</title><link href="https://github.crookster.org/fancy-home-assistant-weather-cards-and-dashboard/" rel="alternate" type="text/html" title="Fancy Home Assistant Weather Cards and Dashboard" /><published>2026-01-09T00:00:00+00:00</published><updated>2026-01-09T00:00:00+00:00</updated><id>https://github.crookster.org/fancy-home-assistant-weather-cards-and-dashboard</id><content type="html" xml:base="https://github.crookster.org/fancy-home-assistant-weather-cards-and-dashboard/"><![CDATA[<p>I recently got a local outdoor weather station and used it a reason to add more weather data to my Home Assistant setup.</p>

<p><img src="/images/hass-weather-dashboard.png" alt=" Weather dashboard" title="Custom Weather Dashboard including live data, forecast and conditions map." /></p>

<h2 id="ecowitt-ws90-and-gw3000">EcoWitt WS90 and GW3000</h2>

<p>I ordered an EcoWitt “Wittboy” kit during the recent holiday deals. It consists of an outdoor solar-powered weather sensor array, and a networked indoor hub. It is the <a href="https://shop.ecowitt.com/collections/weather-station/products/ecowitt-gw3001-gw3011">US version</a>, which uses 915 MHz telemetry.</p>

<p>I created a 3D printed “adapter” for fixing onto a birdhouse pole, making it available on Printables: <a href="https://www.printables.com/model/1485967-ecowitt-wittboy-ws90-weather-station-sensor-array">Ecowitt WittBoy WS90 Weather Station Sensor Array Birdhouse Pole Stand Adapter</a>.</p>

<p><img src="/images/ecowitt-live.png" alt="live data" /></p>

<p>The way these weather stations work is that they periodically transmit sensor readings and then the hub receives these broadcasts and logs the data according to how it has been configured.</p>

<h2 id="home-assistant-basics">Home Assistant Basics</h2>

<p>I use Home Assistant OS on a Raspberry Pi 5. I have been using it since even before the Pi 5 was officially supported, and run it on an NVMe SSD drive. It comes with weather forecast support. There seems to be all manner of weather dashboard integrations, but I came across some now-lost Reddit threads where people rolled their own weather dashboard views, and I ended up adopting/adapting this approach.</p>

<p>I’ve linked to a version of my Card and Dashboard views below.</p>

<h2 id="hass-integrations">Hass Integrations</h2>

<p>Numerous integrations were used to achieve the dashboard views. They are all available via HACS if they are not already among the officially included ones.</p>

<h3 id="hass-integrations---data">Hass Integrations - Data</h3>

<p>Data</p>

<ul>
  <li>Ecowitt official - local API webhook</li>
  <li>Ecowitt</li>
  <li>Pirate Weather</li>
</ul>

<h3 id="hass-integrations---views">Hass Integrations - Views</h3>

<p>For Charts</p>

<ul>
  <li>Apex Charts</li>
  <li>Lovelace Layout Card</li>
  <li>Plotly Graph Card</li>
</ul>

<h4 id="ecowitt-weather-station">Ecowitt weather station</h4>

<p>There are two options that both work; note that the “Official Integration” is not the same as the Ecowitt integration listed in the Home Assistant directory.</p>

<ul>
  <li>“Official Integration” <a href="https://github.com/Ecowitt/ha-ecowitt-iot">Ecowitt/ha-ecowitt-iot: This integration uses the locally available http APIs to obtain data from the supported devices inside the local network.</a></li>
  <li>Listed in Home Assistant directory: <a href="https://www.home-assistant.io/integrations/ecowitt/">Ecowitt - Home Assistant</a></li>
</ul>

<p>The former “Official” Ecowitt integration is local-only and works by creating a webhook. Part of setting it up is configuring the webhook key / URL in the network hub settings.</p>

<p><img src="/images/ecowitt-local-api.png" alt="hub config" /></p>

<p>The latter one uses the built-in cloud data logging from the Ecowitt hub (if you have it enabled). Their cloud service keeps a history of readings. I switched from this to the local-only integration after I installed a micro-SD card in my network hub.</p>

<p>They both expose the readings as entities (some slight variations in naming) for the device.</p>

<h4 id="pirate-weather">Pirate Weather</h4>

<p>Pirate Weather is a weather forecast system with an API similar to the no-longer-available Dark Sky one.</p>

<ul>
  <li><a href="https://docs.pirateweather.net/en/latest/ha/">Home Assistant Integration Documentation - Pirate Weather</a></li>
</ul>

<p>I use it to include weather forecasts. There are all sorts of weather cards and data sources for forecasting. Plenty of easy experimentation to find something you like.</p>

<h4 id="apex-charts">Apex Charts</h4>

<p><a href="https://github.com/RomRider/apexcharts-card">RomRider/apexcharts-card: 📈 A Lovelace card to display advanced graphs and charts based on ApexChartsJS for Home Assistant</a></p>

<h4 id="lovelace-layout-card">Lovelace Layout Card</h4>

<p><a href="https://github.com/thomasloven/lovelace-layout-card">thomasloven/lovelace-layout-card: 🔹 Get more control over the placement of lovelace cards.</a></p>

<h4 id="plotly-graph-card">Plotly Graph Card</h4>

<p><a href="https://github.com/dbuezas/lovelace-plotly-graph-card">dbuezas/lovelace-plotly-graph-card: Highly customisable Lovelace card to plot interactive graphs. Brings scrolling, zooming, and much more!</a></p>

<p>Used in the “Wind Rose” chart.</p>

<p><img src="/images/hass-windrose-chart.png" alt="wind rose chart" /></p>

<h2 id="sensor-card">Sensor Card</h2>

<p>There are other weather card views available, but I used the apex chart view. The view is configured manually using YAML.</p>

<script src="https://gist.github.com/51f27869a4ba4cd78d5cf2be8babe70e.js?file=sensor-card.yaml"> </script>

<p><img src="/images/hass-weather-card.png" alt="weather card" /></p>

<h2 id="weather-dashboard">Weather Dashboard</h2>

<p>The entire dashboard is configured through YAML. I found it easiest to edit the YAML in Visual Studio Code, and then copy and paste the whole file, replacing the entire contents of the configuration.</p>

<p>It has been split into two parts. The first is the “root”, which basically contains templates and DRY default configs for the charts.</p>

<script src="https://gist.github.com/51f27869a4ba4cd78d5cf2be8babe70e.js?file=dashboard-root.yaml"> </script>

<p>And here is the second part, the dashboard view itself, expressed in YAML.</p>

<script src="https://gist.github.com/51f27869a4ba4cd78d5cf2be8babe70e.js?file=dashboard-view.yaml"> </script>

<p><img src="/images/hass-weather-dashboard.png" alt="weather dashboard" /></p>]]></content><author><name></name></author><summary type="html"><![CDATA[I recently got a local outdoor weather station and used it a reason to add more weather data to my Home Assistant setup.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/hass-weather-dashboard.png" /><media:content medium="image" url="https://github.crookster.org/images/hass-weather-dashboard.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Running GPT-OSS 120B on RTX 3080 Ti 12 GB at home</title><link href="https://github.crookster.org/running-gpt-oss-120b-on-rtx-3080-ti-12-gb-at-home/" rel="alternate" type="text/html" title="Running GPT-OSS 120B on RTX 3080 Ti 12 GB at home" /><published>2025-08-25T00:00:00+00:00</published><updated>2025-08-25T00:00:00+00:00</updated><id>https://github.crookster.org/running-gpt-oss-120b-on-rtx-3080-ti-12-gb-at-home</id><content type="html" xml:base="https://github.crookster.org/running-gpt-oss-120b-on-rtx-3080-ti-12-gb-at-home/"><![CDATA[<p>Want to run a 120B parameter modern LLM at home, for a moderate amount of investment? OpenAI GPT OSS, <code class="language-plaintext highlighter-rouge">llama.cpp</code> and a lot of system RAM make it possible.</p>

<h2 id="preparing-your-home-host-hardware">Preparing your home host hardware</h2>

<p>I have an Intel Alder Lake DDR4 system. It has a “gaming” RTX 3080 Ti with 12 GB VRAM that I bought during a lottery a few years back. It also <em>had</em> 32 GB DDR4 RAM (z690 DDR4 motherboard). Then I came across this on Reddit: <strong>“120B runs awesome on just 8GB VRAM!”</strong> -  <a href="https://www.reddit.com/r/LocalLLaMA/comments/1mke7ef/120b_runs_awesome_on_just_8gb_vram/">https://www.reddit.com/r/LocalLLaMA/comments/1mke7ef/120b_runs_awesome_on_just_8gb_vram/</a></p>

<p>It outlines how to run many of the LLM layers in the recently released GPT OSS models on the CPU / system RAM. The key info is the <code class="language-plaintext highlighter-rouge">--n-cpu-moe</code> argument in <code class="language-plaintext highlighter-rouge">llama.cpp</code>, and a mention that 64 GB of system RAM is the <em>minimum</em> for being able to load the <code class="language-plaintext highlighter-rouge">gpt-oss-120B</code> models.</p>

<p>So I found some DDR4 32 GB 3600 MHz modules and <strong>upgraded the system RAM to 128 GB</strong> (cost around $320 altogether) which also happens to be the maximum capacity this motherboard supports. NOTE: the current DDR4 prices are about half of what they were when I first built this system.</p>

<h2 id="installing-the-software-stack-under-linux-using-debian-13-trixie">Installing the software stack (under Linux, using Debian 13 trixie?)</h2>

<p>I did a new Debian 13 (trixie) install onto a 2TB NVMe SSD to have plenty of disk space (those LLM models can get BIG). If I did it again, I would instead go with Ubuntu, if only for its better support of the Nvidia drivers and system power-savings settings. Regardless, I was able to get <code class="language-plaintext highlighter-rouge">trixie</code> working with Nvidia drivers.</p>

<h4 id="installing-the-proprietary-nvidia-drivers">Installing the proprietary Nvidia drivers</h4>

<p>Debian 13 comes with the <code class="language-plaintext highlighter-rouge">nouveau</code> driver set installed/enabled for Nvidia graphics cards. These do not have CUDA support, which is necessary to run the LLM software out there on Nvidia hardware. There are Debian wiki instructions on how to get the Nvidia drivers working on <code class="language-plaintext highlighter-rouge">trixie</code>, but the following Reddit post seems to be even more up-to-date. I followed it in the end.</p>

<p><a href="https://www.reddit.com/r/debian/comments/1mo9ogq/installing_the_proprietary_nvidia_drivers_on/?q=Setting+up+Nvidia+drivers+on+Debian+13+Trixie&amp;tabId=related">Installing the Proprietary NVIDIA drivers on Debian 13 Trixie </a> [reddit]</p>

<p>Long story short, I used the Debian wiki instructions first to install the drivers, and those got things working with an older release of the Nvidia drivers, but when I installed the CUDA toolkit, they got replaced and the system went back to using the <code class="language-plaintext highlighter-rouge">nouveau</code> drivers.</p>

<p>It wasn’t until a couple of troubleshooting iterations later that I got the latest Nvidia driver set installed, and got it to be used by blacklisting the nouveau kernel modules.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nano /etc/modprobe.d/blacklist-nouveau.conf
# add contents
sudo update-initramfs -u
</code></pre></div></div>

<p>contents of <code class="language-plaintext highlighter-rouge">blacklist-nouveau.conf</code></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    blacklist nouveau
    options nouveau modeset=0
</code></pre></div></div>

<p>Some commands to make sure the Nvidia drivers are being used:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>lspci <span class="nt">-nnk</span> | <span class="nb">grep</span> <span class="nt">-A</span> 3 <span class="nt">-i</span> <span class="s2">"vga"</span>
<span class="c"># Kernel driver in use: nvidia</span>
nvidia-smi
</code></pre></div></div>

<h4 id="install-cuda-toolkit">Install CUDA Toolkit</h4>

<p><a href="https://developer.nvidia.com/cuda-downloads?target_os=Linux&amp;target_arch=x86_64&amp;Distribution=Debian&amp;target_version=12&amp;target_type=deb_network">https://developer.nvidia.com/cuda-downloads?target_os=Linux&amp;target_arch=x86_64&amp;Distribution=Debian&amp;target_version=12&amp;target_type=deb_network</a></p>

<p>The instructions were for Debian 12 <code class="language-plaintext highlighter-rouge">bookworm</code>, but they worked for <code class="language-plaintext highlighter-rouge">trixie</code>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-13-0
</code></pre></div></div>

<h3 id="llamacpp"><code class="language-plaintext highlighter-rouge">llama.cpp</code></h3>

<p><code class="language-plaintext highlighter-rouge">llama.cpp</code> does the heavy lifting of running the LLM models. Itt can download published models from HuggingFace which is what we are going to do.</p>

<p><a href="https://github.com/ggml-org/llama.cpp">https://github.com/ggml-org/llama.cpp</a></p>

<h3 id="build-llamacpp-for-cuda">Build <code class="language-plaintext highlighter-rouge">llama.cpp</code> for CUDA</h3>

<p>We’ll build with its CUDA support so it can use our RTX 30-series graphics card.</p>

<p><a href="https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md">https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md</a></p>

<p>First, clone the git repo, as in the instructions. Also, make sure that the CUDA toolkit binaries are in your shell <code class="language-plaintext highlighter-rouge">PATH</code>.</p>
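
<p>In case <code class="language-plaintext highlighter-rouge">nvcc</code> is not already on the <code class="language-plaintext highlighter-rouge">PATH</code> (the <code class="language-plaintext highlighter-rouge">which nvcc</code> check below will tell you), adding the CUDA toolkit’s default install location usually suffices. A hedged sketch; the exact directory can vary with the toolkit version:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export PATH=/usr/local/cuda/bin:$PATH
# to persist it across shell sessions
echo 'export PATH=/usr/local/cuda/bin:$PATH' &gt;&gt; ~/.bashrc
</code></pre></div></div>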

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>which nvcc <span class="c"># /usr/local/cuda/bin</span>
<span class="nb">sudo </span>apt <span class="nb">install </span>cmake cmake-doc
<span class="nb">sudo </span>apt <span class="nb">install </span>ccache
<span class="nb">sudo </span>apt <span class="nb">install </span>libcurl4-openssl-dev

cmake <span class="nt">-B</span> build <span class="nt">-DGGML_CUDA</span><span class="o">=</span>ON <span class="nt">-DCMAKE_CUDA_ARCHITECTURES</span><span class="o">=</span><span class="s2">"75;86"</span>
<span class="c">#cmake -B build -DGGML_CUDA=ON # this was not autodetecting SM version #</span>
cmake <span class="nt">--build</span> build <span class="nt">--config</span> Release
</code></pre></div></div>

<p>It takes a while to build, but when it’s complete, there’ll be fresh <code class="language-plaintext highlighter-rouge">llama</code> binaries in your <code class="language-plaintext highlighter-rouge">build</code> area.</p>

<h3 id="run-llama-server-on-gpt-oss-120b">Run <code class="language-plaintext highlighter-rouge">llama-server</code> on <code class="language-plaintext highlighter-rouge">gpt-oss-120b</code></h3>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./build/bin/llama-server <span class="nt">-hf</span> ggml-org/gpt-oss-120b-GGUF <span class="se">\</span>
    <span class="nt">--n-cpu-moe</span> 36 <span class="se">\</span>
    <span class="nt">--n-gpu-layers</span> 999 <span class="se">\</span>
    <span class="nt">-c</span> 0 <span class="nt">-fa</span> <span class="se">\</span>
    <span class="nt">--jinja</span> <span class="nt">--reasoning-format</span> none <span class="se">\</span>
    <span class="nt">--host</span> 0.0.0.0 <span class="nt">--port</span> 8000 <span class="nt">--api-key</span> <span class="s2">"dummy"</span> <span class="se">\</span>

</code></pre></div></div>

<p>Downloading the 60 GB model files can take a while even at 55 MB/sec.</p>

<ul>
  <li>The CPU layers consume about <strong>60 GB system RAM</strong> (out of 128 GB)</li>
  <li>The GPU layers consume about <strong>6 GB VRAM</strong> (out of 12 GB)</li>
</ul>

<p>I’ve been consistently getting 18-22 tokens/sec output speeds.</p>

<p>See the original Reddit post referenced above for more on the <code class="language-plaintext highlighter-rouge">--n-cpu-moe</code> and <code class="language-plaintext highlighter-rouge">--cpu-moe</code> options in <code class="language-plaintext highlighter-rouge">llama.cpp</code>, along with the <code class="language-plaintext highlighter-rouge">gpt-oss</code> models.</p>

<h2 id="using-gpt-oss-120b">Using GPT OSS 120B</h2>

<p>I really haven’t been using LLMs personally for much of anything but have of course seen posts out there. (Confession: I <em>have</em> been using small GEMMA 3 models running on an M1 Mac mini to auto-tag <a href="https://karakeep.app/"><code class="language-plaintext highlighter-rouge">karakeep</code></a> items, but this is not me actively using an LLM for anything.)</p>

<h3 id="an-actual-example">An actual example.</h3>

<p>The time to first token in this setup takes a while, especially for prompts that have larger contexts. Anyway, I have a <a href="https://github.com/idcrook/timetemp3">“Time Temp”</a> python library I’ve been using for over a decade in some form, and it displays output on LED seven-segment I2C displays, one being the current time: <a href="https://github.com/idcrook/timetemp3/blob/main/timetemp3/my_7segment_clock.py">my_7segment_clock.py</a>.</p>

<p>Can my local LLM come up with a python script to do this? Turns out…</p>

<h4 id="prompt">Prompt</h4>

<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>write a python script script for displaying the current time on an 4 digit 7 segment
LED display connected over an I2C bus to a raspberry pi
</code></pre></div></div>

<p>API call</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl <span class="nt">--request</span> POST <span class="se">\</span>
  <span class="nt">--url</span> http://localhost:8000/completion <span class="se">\</span>
  <span class="nt">--header</span> <span class="s1">'Content-Type: application/json'</span> <span class="se">\</span>
  <span class="nt">--header</span> <span class="s2">"Authorization: Bearer dummy"</span> <span class="se">\</span>
  <span class="nt">--data</span> <span class="s1">'{
    "prompt": "write a python script script for displaying the current time on an 4 digit 7 segment LED display connected over an I2C bus to a raspberry pi",
    "n_predict": 1024}'</span>
</code></pre></div></div>

<h4 id="response">Response</h4>

<p>The response is JSON formatted, a snippet of which I’ll share here:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{"index":0,"content":"?\n\nTo display the current time on a 4-digit 7-segment LED display connected via I2C to a Raspberry Pi, you can use the `smbus2` library for I2C communication and the `datetime` module for obtaining the current time.\n\nHere’s a detailed Python script to accomplish this:\n\n### Hardware Setup\n\n- **4-Digit 7-Segment Display**: Typically ..."}
</code></pre></div></div>

<p>The whole response was formatted as a Markdown document and contained extensive documentation.</p>

<h2 id="markdown-formatted-response">markdown-formatted response</h2>

<p><strong>Included in full below.</strong> Keep in mind this is the exact output response from the LLM resulting from my prompt above including its repeated-word typo. This output kinda blew my mind when I first saw it!</p>

<hr />
<p>?</p>

<p>To display the current time on a 4-digit 7-segment LED display connected via I2C to a Raspberry Pi, you can use the <code class="language-plaintext highlighter-rouge">smbus2</code> library for I2C communication and the <code class="language-plaintext highlighter-rouge">datetime</code> module for obtaining the current time.</p>

<p>Here’s a detailed Python script to accomplish this:</p>

<h3 id="hardware-setup">Hardware Setup</h3>

<ul>
  <li><strong>4-Digit 7-Segment Display</strong>: Typically these displays use a driver like the HT16K33.</li>
  <li><strong>Raspberry Pi</strong>: Ensure the I2C interface is enabled.</li>
  <li><strong>Connections</strong>:
    <ul>
      <li><strong>VCC</strong> to 3.3V (or 5V depending on your display module)</li>
      <li><strong>GND</strong> to GND</li>
      <li><strong>SDA</strong> to SDA (GPIO 2)</li>
      <li><strong>SCL</strong> to SCL (GPIO 3)</li>
    </ul>
  </li>
</ul>

<h3 id="enable-i2c-on-raspberry-pi">Enable I2C on Raspberry Pi</h3>

<ol>
  <li>Open the terminal.</li>
  <li>Run <code class="language-plaintext highlighter-rouge">sudo raspi-config</code>.</li>
  <li>Navigate to “Interfacing Options” -&gt; “I2C” and enable it.</li>
  <li>Reboot your Raspberry Pi.</li>
</ol>

<h3 id="install-required-libraries">Install Required Libraries</h3>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip <span class="nb">install </span>smbus2
</code></pre></div></div>

<h3 id="python-script">Python Script</h3>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">time</span>
<span class="kn">import</span> <span class="nn">datetime</span>
<span class="kn">from</span> <span class="nn">smbus2</span> <span class="kn">import</span> <span class="n">SMBus</span><span class="p">,</span> <span class="n">i2c_msg</span>

<span class="c1"># I2C address of the HT16K33 (adjust if different)
</span><span class="n">HT16K33_ADDRESS</span> <span class="o">=</span> <span class="mh">0x70</span>

<span class="c1"># Define segment patterns for digits 0-9
</span><span class="n">SEGMENT_MAP</span> <span class="o">=</span> <span class="p">{</span>
    <span class="s">'0'</span><span class="p">:</span> <span class="mh">0x3F</span><span class="p">,</span>
    <span class="s">'1'</span><span class="p">:</span> <span class="mh">0x06</span><span class="p">,</span>
    <span class="s">'2'</span><span class="p">:</span> <span class="mh">0x5B</span><span class="p">,</span>
    <span class="s">'3'</span><span class="p">:</span> <span class="mh">0x4F</span><span class="p">,</span>
    <span class="s">'4'</span><span class="p">:</span> <span class="mh">0x66</span><span class="p">,</span>
    <span class="s">'5'</span><span class="p">:</span> <span class="mh">0x6D</span><span class="p">,</span>
    <span class="s">'6'</span><span class="p">:</span> <span class="mh">0x7D</span><span class="p">,</span>
    <span class="s">'7'</span><span class="p">:</span> <span class="mh">0x07</span><span class="p">,</span>
    <span class="s">'8'</span><span class="p">:</span> <span class="mh">0x7F</span><span class="p">,</span>
    <span class="s">'9'</span><span class="p">:</span> <span class="mh">0x6F</span><span class="p">,</span>
    <span class="s">'-'</span><span class="p">:</span> <span class="mh">0x40</span><span class="p">,</span>
    <span class="s">' '</span><span class="p">:</span> <span class="mh">0x00</span><span class="p">,</span>
    <span class="s">':'</span><span class="p">:</span> <span class="mh">0x02</span>  <span class="c1"># Colon (optional, depending on display)
</span><span class="p">}</span>

<span class="k">def</span> <span class="nf">init_display</span><span class="p">(</span><span class="n">bus</span><span class="p">):</span>
    <span class="c1"># Turn on the oscillator
</span>    <span class="n">bus</span><span class="p">.</span><span class="n">write_byte_data</span><span class="p">(</span><span class="n">HT16K33_ADDRESS</span><span class="p">,</span> <span class="mh">0x21</span><span class="p">,</span> <span class="mh">0x00</span><span class="p">)</span>
    <span class="c1"># Set display to no blink
</span>    <span class="n">bus</span><span class="p">.</span><span class="n">write_byte_data</span><span class="p">(</span><span class="n">HT16K33_ADDRESS</span><span class="p">,</span> <span class="mh">0x81</span><span class="p">,</span> <span class="mh">0x00</span><span class="p">)</span>
    <span class="c1"># Set brightness (0x0 to 0xF)
</span>    <span class="n">bus</span><span class="p">.</span><span class="n">write_byte_data</span><span class="p">(</span><span class="n">HT16K33_ADDRESS</span><span class="p">,</span> <span class="mh">0xEF</span><span class="p">,</span> <span class="mh">0x00</span><span class="p">)</span>

<span class="k">def</span> <span class="nf">set_digit</span><span class="p">(</span><span class="n">bus</span><span class="p">,</span> <span class="n">position</span><span class="p">,</span> <span class="n">value</span><span class="p">):</span>
    <span class="c1"># Write the segment value for a specific digit position
</span>    <span class="n">bus</span><span class="p">.</span><span class="n">write_byte_data</span><span class="p">(</span><span class="n">HT16K33_ADDRESS</span><span class="p">,</span> <span class="n">position</span> <span class="o">*</span> <span class="mi">2</span><span class="p">,</span> <span class="n">value</span><span class="p">)</span>

<span class="k">def</span> <span class="nf">display_time</span><span class="p">(</span><span class="n">bus</span><span class="p">,</span> <span class="n">hour</span><span class="p">,</span> <span class="n">minute</span><span class="p">):</span>
    <span class="c1"># Format the time as a string
</span>    <span class="n">time_str</span> <span class="o">=</span> <span class="sa">f</span><span class="s">"</span><span class="si">{</span><span class="n">hour</span><span class="si">:</span><span class="mi">02</span><span class="si">}{</span><span class="n">minute</span><span class="si">:</span><span class="mi">02</span><span class="si">}</span><span class="s">"</span>
    <span class="c1"># Split the string into individual digits
</span>    <span class="n">digits</span> <span class="o">=</span> <span class="p">[</span><span class="n">SEGMENT_MAP</span><span class="p">[</span><span class="n">d</span><span class="p">]</span> <span class="k">for</span> <span class="n">d</span> <span class="ow">in</span> <span class="n">time_str</span><span class="p">]</span>
    <span class="c1"># Set the display digits
</span>    <span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">digit</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">digits</span><span class="p">):</span>
        <span class="n">set_digit</span><span class="p">(</span><span class="n">bus</span><span class="p">,</span> <span class="n">i</span><span class="p">,</span> <span class="n">digit</span><span class="p">)</span>
    <span class="c1"># Optional: Set the colon (if supported)
</span>    <span class="c1"># bus.write_byte_data(HT16K33_ADDRESS, 0x04, 0x02)  # Turn on colon
</span>
<span class="k">def</span> <span class="nf">main</span><span class="p">():</span>
    <span class="n">bus</span> <span class="o">=</span> <span class="n">SMBus</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># Use I2C bus 1
</span>    <span class="n">init_display</span><span class="p">(</span><span class="n">bus</span><span class="p">)</span>

    <span class="k">try</span><span class="p">:</span>
        <span class="k">while</span> <span class="bp">True</span><span class="p">:</span>
            <span class="n">now</span> <span class="o">=</span> <span class="n">datetime</span><span class="p">.</span><span class="n">datetime</span><span class="p">.</span><span class="n">now</span><span class="p">()</span>
            <span class="n">hour</span> <span class="o">=</span> <span class="n">now</span><span class="p">.</span><span class="n">hour</span> <span class="o">%</span> <span class="mi">12</span>  <span class="c1"># 12-hour format
</span>            <span class="k">if</span> <span class="n">hour</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
                <span class="n">hour</span> <span class="o">=</span> <span class="mi">12</span>
            <span class="n">minute</span> <span class="o">=</span> <span class="n">now</span><span class="p">.</span><span class="n">minute</span>
            <span class="n">display_time</span><span class="p">(</span><span class="n">bus</span><span class="p">,</span> <span class="n">hour</span><span class="p">,</span> <span class="n">minute</span><span class="p">)</span>
            <span class="n">time</span><span class="p">.</span><span class="n">sleep</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span>
    <span class="k">except</span> <span class="nb">KeyboardInterrupt</span><span class="p">:</span>
        <span class="c1"># Clear the display on exit
</span>        <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">4</span><span class="p">):</span>
            <span class="n">set_digit</span><span class="p">(</span><span class="n">bus</span><span class="p">,</span> <span class="n">i</span><span class="p">,</span> <span class="n">SEGMENT_MAP</span><span class="p">[</span><span class="s">' '</span><span class="p">])</span>
        <span class="n">bus</span><span class="p">.</span><span class="n">close</span><span class="p">()</span>

<span class="k">if</span> <span class="n">__name__</span> <span class="o">==</span> <span class="s">'__main__'</span><span class="p">:</span>
    <span class="n">main</span><span class="p">()</span>
</code></pre></div></div>

<h3 id="explanation">Explanation</h3>

<ul>
  <li><strong>Segment Map</strong>: Defines the patterns for each digit on the 7-segment display.</li>
  <li><strong><code class="language-plaintext highlighter-rouge">init_display</code></strong>: Initializes the HT16K33 display driver.</li>
  <li><strong><code class="language-plaintext highlighter-rouge">set_digit</code></strong>: Writes the segment data to a specific position.</li>
  <li><strong><code class="language-plaintext highlighter-rouge">display_time</code></strong>: Converts the current time to a format suitable for the display and updates it.</li>
  <li><strong><code class="language-plaintext highlighter-rouge">main</code></strong>: Continuously updates the display with the current time every second.</li>
</ul>

<h3 id="additional-notes">Additional Notes</h3>

<ul>
  <li>Adjust the I2C address if your display uses a different one.</li>
  <li>The colon display (<code class="language-plaintext highlighter-rouge">SEGMENT_MAP[':']</code>) might need to be handled differently depending on your specific display module.</li>
  <li>Ensure your display’s wiring is correct and that the Raspberry Pi’s I2C bus is properly enabled.</li>
</ul>

<p>This script should provide a basic framework for displaying the current time on</p>

<hr />

<h2 id="llama-server-running"><code class="language-plaintext highlighter-rouge">llama-server</code> running</h2>

<p><img src="/images/llama-gpt-oss-120b.png" alt="llama.cpp server running with API request screenshot" /></p>]]></content><author><name></name></author><summary type="html"><![CDATA[Want to run a 120B parameter modern LLM at home, for a moderate amount of investment? OpenAI GPT OSS, llama.cpp and a lot of system RAM make it possible.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/llama-gpt-oss-120b.png" /><media:content medium="image" url="https://github.crookster.org/images/llama-gpt-oss-120b.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Using PC Fan (12V) for 10-Inch Rackmount Setup</title><link href="https://github.crookster.org/using-pc-fan-(12v)-for-10-inch-rackmount-setup/" rel="alternate" type="text/html" title="Using PC Fan (12V) for 10-Inch Rackmount Setup" /><published>2025-07-23T00:00:00+00:00</published><updated>2025-07-23T00:00:00+00:00</updated><id>https://github.crookster.org/using-pc-fan-(12v)-for-10-inch-rackmount-setup</id><content type="html" xml:base="https://github.crookster.org/using-pc-fan-(12v)-for-10-inch-rackmount-setup/"><![CDATA[<p>I recently put into use a 10-inch rack mount for self-hosting/home lab. Here’s how I am running the cooling fan now.</p>

<p><img src="/images/usba-12V-trig-in-PDU_sm4.jpg" alt="power converter running the fan" /></p>

<p>The wires running to the fan cooling the rackmount can be spotted below the USB power supply.</p>

<h2 id="10-inch-rackmount-setup">10-inch rackmount setup</h2>

<p>The <em>DeskPi RackMate T1 Rackmount</em> comes with an optional bottom fan-mount that accepts <code class="language-plaintext highlighter-rouge">2 X 80 mm</code> or <code class="language-plaintext highlighter-rouge">1 X 120 mm</code> fans for blowing air around aka cooling. I have spare <code class="language-plaintext highlighter-rouge">120 mm</code> PC cooling fans so I attached one of those when I built it. The spare PC fans run off 12V and have a 3-pin PC motherboard fan connector as well as 4-pin Molex connectors.</p>

<p>When I originally installed it, I was using a 12V feeder that came off a Raspberry Pi Power-over-Ethernet (PoE) HAT that was running in the rack. This served perfectly fine, since power kept flowing through the PoE connection even if the OS on the Raspberry Pi was shut down.</p>

<p><img src="/images/10inch-rack-redundant-pdu_sm6.jpg" alt="10-inch rackmount surge strips" /></p>

<p>In the photo there are a white plastic and a black plastic power strip that each have 1 USB-C and 2 USB-A power output ports. You can spot the “orange” USB ports, which some manufacturers use to signify power-only ports.</p>

<h3 id="powering-the-cooling-fan-from-usb">Powering the Cooling Fan from USB</h3>

<p>Why change? Eliminating that particular PoE HAT/Pi dependency, as well as installing an improvised Power Distribution Unit (“PDU”), were the reasons. I found some 10-inch power strips that mount exactly into a 10-inch server mini-rack. Additionally, these have USB-C PD and USB-A QC sockets, which can output 9V and 12V as well as the standard USB 5V.</p>

<p>To access the 12V for the PC fan, a “trigger” device is employed that tells the USB port to output 12V. These devices are widely available and can support USB-PD, QC, and USB-A or USB-C. I had some in stock so let’s use them.</p>

<h2 id="creating-a-usb-powered-pc-fan-supply">Creating a USB-Powered PC Fan Supply</h2>

<p><img src="/images/usbc-12V-trig_sm3.jpg" alt="USB-C PD trigger 12V" /></p>

<p>I have 22 AWG stranded hook-up wire, which I soldered into the positive and negative outputs of the USB-C input trigger PCB.</p>

<p>In the photo, the output holes were drilled for 24 AWG diameter wire, but I was able to make the wider stranded 22 AWG wire work. You can also see the (factory-set) solder jumper - a zero-ohm resistor - on the board setting the output to <code class="language-plaintext highlighter-rouge">12V</code>, but 9V, 15V, and 20V are also available.</p>

<p><img src="/images/terminals-in-fan-molex-conn_sm2.jpg" alt="terminals pluggin into 12V/Gnd wires on Molex connector" /></p>

<p>I have some tubular connector crimp terminals that happen to closely match the diameter of the female Molex connector, and so I put these on the other end of the 22 AWG hookup wires. Just a friction fit – no soldering on this end.</p>

<p><img src="/images/terminal-crimp-tool_sm5.jpg" alt="terminal crimping tool" /></p>

<p>A crimp tool was used, but I found that crushing the shroud and metal tube of the tubular bullet connector with pliers held the wire in place better.</p>

<p><img src="/images/usbc-12v-fan-testing_sm1.jpg" alt="12V PC Fan hooked up to USB power supply" /></p>

<p>It turns out that the USB-C trigger converter– while labeled for USB-PD– also worked fine with the USB-A QC USB power supplies.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I recently put into use a 10-inch rack mount for self-hosting/home lab. Here’s how I am running the cooling fan now.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/usbc-12V-trig.jpg" /><media:content medium="image" url="https://github.crookster.org/images/usbc-12V-trig.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Seedomatic - Using Home Assistant to Automate Seedlings</title><link href="https://github.crookster.org/seedomatic-using-a-pico-w-and-home-assistant-to-automate-seedlings/" rel="alternate" type="text/html" title="Seedomatic - Using Home Assistant to Automate Seedlings" /><published>2025-03-25T00:00:00+00:00</published><updated>2025-03-25T00:00:00+00:00</updated><id>https://github.crookster.org/seedomatic---using-a-pico-w-and-home-assistant-to-automate-seedlings</id><content type="html" xml:base="https://github.crookster.org/seedomatic-using-a-pico-w-and-home-assistant-to-automate-seedlings/"><![CDATA[<p>I have a vegetable garden and start seedlings indoors during the winter. I recently renovated my automation setup for this to be based in Home Assistant.</p>

<h2 id="what-do-i-need-automation-for">What Do I Need Automation For?</h2>

<p>It comes down to temperature control. Seeds germinate best under certain conditions (moisture, temperature, etc.), and the main thing I needed to do was control soil temperature.</p>

<p>I have seedling tray-sized heat mats. There is no temperature monitoring included on these: just power or no power. This was a simple enough problem to solve:</p>

<ol>
  <li>Measure temperature</li>
  <li>If temperature is above target temperature, turn heat pad <strong>OFF</strong>.</li>
  <li>If temperature is below target temperature, turn heat pad <strong>ON</strong>.</li>
  <li>Repeat cycle</li>
</ol>

<p><img src="/images/yaha_temp_mon_oled_wires_sm1.jpeg" alt="Pico W wiring and OLED display &lt;&gt;" title="Pico W temperature monitor with optional OLED display" /></p>

<h3 id="previous-solution-mqtt-wemo-smart-outlets-and-python-script">Previous solution: MQTT, Wemo smart outlets and Python script</h3>

<p>I have a few Wemo smart outlets. There is a python library for monitoring and controlling these over the network. So the <strong>ON</strong>/<strong>OFF</strong> aspect of the heat pads can be done using these.</p>

<p>MQTT is a reliable PubSub solution. Useful in home automation, it runs great within a  home LAN. My MQTT broker is <code class="language-plaintext highlighter-rouge">mosquitto</code> running on a Raspberry Pi. You can publish sensor data to MQTT from wireless nodes like ESP32 or Pico W micro-controllers. This sensor data can be consumed by other processes.</p>

<p>Since the control script was in python, it was simple enough to switch to a different smart outlet platform (Wemo to Kasa), as existing python libraries are available for both. This was running as a systemd service on a Raspberry Pi.</p>

<ul>
  <li><a href="https://github.com/idcrook/indoor-seedomation?tab=readme-ov-file#indoor-seedomation">indoor-seedomation - python temperature monitor and heater control </a></li>
</ul>

<p>I have a Raspberry Pi Pico W (“W” is for Wi-Fi, I guess) microcontroller for the sensor part. Add MicroPython and a DS18B20 waterproof temperature sensor that can be embedded directly in the seedling trays and soil, and the temperature sensing is handled (sketched below).</p>

<ul>
  <li><a href="https://github.com/idcrook/picow-projects/tree/main/multi_monitor">MQTT sensor node on Pico W</a></li>
</ul>
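
<p>A stripped-down version of what such a sensor node does looks roughly like this, using MicroPython’s standard <code class="language-plaintext highlighter-rouge">onewire</code>/<code class="language-plaintext highlighter-rouge">ds18x20</code> drivers and <code class="language-plaintext highlighter-rouge">umqtt.simple</code>. The GPIO pin, broker hostname, and topic are placeholders, not the actual values from my repo:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import time
import machine, onewire, ds18x20
from umqtt.simple import MQTTClient

# DS18B20 data line on a GPIO pin (pin 22 is just an example)
ds = ds18x20.DS18X20(onewire.OneWire(machine.Pin(22)))
rom = ds.scan()[0]

mqtt = MQTTClient("seedling-node-1", "mosquitto.local")
mqtt.connect()

while True:
    ds.convert_temp()      # start a temperature conversion on the sensor
    time.sleep_ms(750)     # DS18B20 needs ~750 ms at 12-bit resolution
    temp_c = ds.read_temp(rom)
    mqtt.publish(b"seedlings/bed1/temperature", str(temp_c).encode())
    time.sleep(60)
</code></pre></div></div>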

<h2 id="updated-solution-home-assistant-mqtt-discovery-for-sensors">Updated Solution: Home Assistant MQTT Discovery for Sensors</h2>

<p>Over the winter I set up a Raspberry Pi running Home Assistant OS. I migrated my collection of Zigbee devices and basically everything else away from SmartThings and Homebridge. The Home Assistant ecosystem has an incredible number of well-supported integrations! What if I could use some native features of Home Assistant to take over the temperature control task for my seedlings?</p>

<p>I just love the productivity available with MicroPython. You can iterate quickly, even running a REPL on the target device for debug exploration. I started from my previous Pico W MQTT sensor node, and investigated something in Hass called <a href="https://www.home-assistant.io/integrations/mqtt/#mqtt-discovery">MQTT (Auto) Discovery</a>. It turns out that once a few important concepts are understood, it’s relatively simple to turn any sensor into one recognized by Hass. It’s mainly just JSON messages published into an agreed-upon MQTT topic hierarchy, and then it “Just Works”.</p>

<p><a href="https://github.com/idcrook/yaha_temperature_monitor">Yet Another Home Assistant Temperature monitor, using micropython and Pico W</a></p>
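
<p>The gist of MQTT Discovery is one retained JSON config message per entity, published under the agreed-upon <code class="language-plaintext highlighter-rouge">homeassistant/</code> topic prefix. A sketch of the config a node might publish follows; the entity names, IDs, and topics are illustrative, not the exact ones from my repo:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import json
from umqtt.simple import MQTTClient

mqtt = MQTTClient("seedling-node-1", "mosquitto.local")
mqtt.connect()

# Discovery topic pattern: homeassistant/&lt;component&gt;/&lt;object_id&gt;/config
config_topic = b"homeassistant/sensor/seedling_bed1_temp/config"
config = {
    "name": "Seedling Bed 1 Soil Temperature",
    "unique_id": "seedling_bed1_temp",
    "device_class": "temperature",
    "unit_of_measurement": "°C",
    "state_topic": "seedlings/bed1/temperature",
}

# Publish retained so Home Assistant re-discovers the sensor after restarts,
# then keep publishing readings to the state_topic as usual.
mqtt.publish(config_topic, json.dumps(config).encode(), retain=True)
</code></pre></div></div>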

<h3 id="microcontroller-wi-fi-issues">Microcontroller Wi-Fi Issues?</h3>

<p>In a “word”: <a href="https://github.com/idcrook/yaha_temperature_monitor/blob/7fda99e39fb2ca673fdd78e5ac31396a759f5933/main.py#L351-L358"><em>Hardware Watchdog Timer</em></a></p>

<p>One area I greatly improved for this second generation was more robust wireless networking. Making use of the Hardware Watchdog Timer available in MicroPython and on the Pico W, I was able to have my sensor nodes stay up even across router reboots or flaky network signaling.</p>

<p>Whereas previously I would have to go unplug and re-plug the Pico W whenever I rebooted a router or it lost its network connection, now it resets itself and consistently re-connects to the wireless network without physical intervention.</p>
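
<p>The idea fits in a few lines of MicroPython: arm the hardware watchdog, and only feed it from the main loop while the network is healthy. A minimal sketch follows; the 8-second value is near the longest timeout the RP2040 watchdog supports, and the Wi-Fi credentials are placeholders:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import time
import network
from machine import WDT

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("my-ssid", "my-password")   # placeholder credentials

# Arm the hardware watchdog: if it is not fed within ~8 s, the board resets
# and main.py runs again from the top, re-joining Wi-Fi on its own.
wdt = WDT(timeout=8000)

while True:
    if wlan.isconnected():
        # ... read sensors and publish to MQTT here ...
        wdt.feed()   # only feed the watchdog while the network is up
    time.sleep(5)
</code></pre></div></div>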

<h3 id="home-assistant-has-a-built-in-generic-thermostat">Home Assistant Has A Built-in “Generic Thermostat”</h3>

<p>Home Assistant has a built-in “Generic Thermostat” device. Just connect a temperature sensor and, in my case, a smart power strip outlet as the control, set your desired temperature range, and it will take care of monitoring and control.</p>

<p><img src="/images/bed_heat_mat_temp.png" alt="Heat Mat Bed 2 soil Temperature &lt;&gt;" title="Heat Mat Bed 2 soil Temperature" />
<img src="/images/bed_heater_control.png" alt="Heat Mat on off control &lt;&gt;" title="Heat Mat on off control" /></p>

<p>Since these are now Hass native controls and devices, they can be arranged like any other dashboard.</p>

<p><img src="/images/three_bed_temps.png" alt="Bed temperatures, ambients, thermostats, and smart strip &lt;&gt;" title="Bed temperatures, ambients, thermostats, and smart strip" /></p>

<h2 id="conclusion">Conclusion</h2>

<p>I have moved almost my entire “smart home” over to Home Assistant, and now that I am familiar with some of its internals, I suspect even more will be added to my home’s configuration over time.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I have a vegetable garden and start seedlings indoors during the winter. I recently renovated my automation setup for this to be based in Home Assistant.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/three_bed_temps.png" /><media:content medium="image" url="https://github.crookster.org/images/three_bed_temps.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How Do I Update Docker Container/Image Version in QNAP Container Station?</title><link href="https://github.crookster.org/how-do-i-update-docker-container-image-version-in-qnap-container-station/" rel="alternate" type="text/html" title="How Do I Update Docker Container/Image Version in QNAP Container Station?" /><published>2025-03-14T00:00:00+00:00</published><updated>2025-03-14T00:00:00+00:00</updated><id>https://github.crookster.org/how-do-i-update-docker-container-image-version-in-qnap-container-station</id><content type="html" xml:base="https://github.crookster.org/how-do-i-update-docker-container-image-version-in-qnap-container-station/"><![CDATA[<p>I needed to update a service running in docker container (<a href="https://pi-hole.net/">Pi-hole – Network-wide Ad Blocking</a>) on my QNAP NAS. It is running as a container and not an “app” (<code class="language-plaintext highlighter-rouge">docker compose</code> in QNAP vernacular) and was originally created by GUI. Googling for how to do using the QNAP QTS Container Station GUI only worked for apps. I eventually have figured out a general way to do what I need, using command line docker on the QNAP.</p>

<h2 id="recreating-a-container">Recreating a container?</h2>

<p>It turns out there is no way in the QNAP Container Station web app GUI to “recreate” a container with the updated-to-latest-version image. Annoyingly, this works as desired in the “Apps” area of Container Station, but those are based on <code class="language-plaintext highlighter-rouge">docker compose</code> services and not the simpler single container.</p>

<p>Container Station loses track of which image is allocated to your container if you manually pull an image in the Images interface, and then it disables the “Re-create” item for the container.  However, if you do “re-create” the container, thinking it will pull the <code class="language-plaintext highlighter-rouge">latest</code> tagged image and use that for your “re-created” container, you’d be mistaken. It just re-uses the existing (older) image.</p>

<h2 id="using-virtual-networking-in-container-station">Using Virtual Networking in Container Station</h2>

<p>It turns out that Container Station is syntactic sugar around a standard Linux docker install. It knows how to map filesystem shares, special networking, and a subset of devices to services or containers. Understanding this is key to being able to update the image version for an existing container.</p>

<h3 id="needing-a-static-ip-on-home-lan-for-pi-hole">Needing a static IP on home LAN for <code class="language-plaintext highlighter-rouge">pi-hole</code></h3>

<p>I started using <code class="language-plaintext highlighter-rouge">pi-hole</code> on my home LAN, and the simplest way to enable it for all network clients is to use the <code class="language-plaintext highlighter-rouge">pi-hole</code> server as the DNS server. How this works: the <code class="language-plaintext highlighter-rouge">pi-hole</code> server IP is added as a manually specified DNS server on the router. Then, when the router issues DHCP assignments, it includes <code class="language-plaintext highlighter-rouge">pi-hole</code>’s IP in the list of DNS servers provided to clients.</p>

<p>For this to work, the <code class="language-plaintext highlighter-rouge">pi-hole</code> server IP should be well-known, i.e., a static DHCP reservation.  And for that to work, the MAC address of the pi-hole server needs to be set in the router’s DHCP reservation table so that it assigns the static IP to the pi-hole service/server.</p>

<p>In our scenario, the pi-hole server is really only a docker container running at a virtualized network address managed by the NAS. Since the IP is virtualized, multiple unique IPs can be assigned to docker container services, and it uses a QNET interface – something specific to QNAP’s Container Station and docker implementation.  I was not able to determine how to reliably configure the QNET interfaces myself, so I had Container Station do it for me!</p>

<p>It’s simply a process of using the Container Station container UI to create the desired container, setting your required environment variables, and including host networking found in the Advanced configuration, where a virtual interface with a corresponding randomly-chosen MAC address can be allocated.</p>

<p><img src="/images/container_advanced_networking_host.png" alt="including host networking found in Advanced configuration &lt;&gt;" title="Including host networking found in Advanced configuration" /></p>

<h2 id="using-docker-cli-and-some-utilities-to-update-version">Using docker CLI and some utilities to update version</h2>

<p>So now we have a container running using the QNAP-native QNET virtual network interface allocation. How do we update the image version for the container?  We can use the command line on the QNAP server, so remote login to your NAS using <code class="language-plaintext highlighter-rouge">ssh</code>.</p>

<h3 id="figure-out-docker-command-line-used-to-create-image">Figure out the docker command line used to create the container</h3>

<p>There are a couple of useful utilities we’ll employ, using docker itself, to determine the command line we’ll use to re-start our container with the latest image.</p>

<p>To get <code class="language-plaintext highlighter-rouge">YOUR-CONTAINER</code> you can use</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span>~] <span class="c"># docker ps</span>
CONTAINER ID   IMAGE                                COMMAND                  CREATED      STATUS                PORTS                    NAMES
132ed565d542   pihole/pihole:latest                 <span class="s2">"start.sh"</span>               8 days ago   Up 8 days <span class="o">(</span>healthy<span class="o">)</span>                            pihole-1
</code></pre></div></div>

<p>Above <code class="language-plaintext highlighter-rouge">132ed565d542</code> (CONTAINER ID) is <code class="language-plaintext highlighter-rouge">YOUR-CONTAINER</code>.</p>
<h4 id="part-1">Part 1</h4>

<p>Using <a href="https://github.com/lavie/runlike"><code class="language-plaintext highlighter-rouge">runlike</code></a></p>

<p>Found via <a href="https://stackoverflow.com/a/32774347/47850">https://stackoverflow.com/a/32774347/47850</a></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="nt">--rm</span> <span class="nt">-v</span> /var/run/docker.sock:/var/run/docker.sock:ro <span class="se">\</span>
    assaflavie/runlike YOUR-CONTAINER
</code></pre></div></div>

<p>yields something like:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="nt">--name</span><span class="o">=</span>pihole-1 <span class="nt">--hostname</span><span class="o">=</span>3467123adbad <span class="nt">--mac-address</span><span class="o">=</span>02:03:04:05:06:07 <span class="se">\</span>
 <span class="nt">--volume</span> /share/Software/Configs/pihole:/etc/pihole <span class="nt">--env</span><span class="o">=</span><span class="s1">'FTLCONF_webserver_api_password=SUPER SECRET'</span> <span class="se">\</span>
 <span class="nt">--env</span><span class="o">=</span><span class="nv">FTLCONF_dns_listeningMode</span><span class="o">=</span>all <span class="nt">--env</span><span class="o">=</span><span class="nv">TZ</span><span class="o">=</span>America/Denver <span class="nt">--privileged</span> <span class="nt">--network</span><span class="o">=</span>qnet-dhcp-eth0-abcdef <span class="se">\</span>
 <span class="nt">--workdir</span><span class="o">=</span>/ <span class="nt">--restart</span><span class="o">=</span>unless-stopped <span class="nt">--log-opt</span> max-file<span class="o">=</span>10 <span class="nt">--log-opt</span> max-size<span class="o">=</span>10m <span class="nt">--runtime</span><span class="o">=</span>runc <span class="nt">--detach</span><span class="o">=</span><span class="nb">true</span> <span class="nt">-t</span> <span class="se">\</span>
 pihole/pihole:latest
</code></pre></div></div>

<p>Notice this includes the <code class="language-plaintext highlighter-rouge">--mac-address=02:03:04:05:06:07</code></p>

<h4 id="part-2">Part 2</h4>

<p>We are actually going to use most of the output from this Part 2 to form the command line. It uses clever docker templates to fill in a readable command line.</p>

<p><a href="https://gist.github.com/efrecon/8ce9c75d518b6eb863f667442d7bc679#file-run-tpl">https://gist.github.com/efrecon/8ce9c75d518b6eb863f667442d7bc679#file-run-tpl</a></p>

<p>Found via <a href="https://stackoverflow.com/a/38077377/47850">https://stackoverflow.com/a/38077377/47850</a></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker inspect <span class="nt">--format</span> <span class="s2">"</span><span class="si">$(</span>curl <span class="nt">-s</span> https://gist.githubusercontent.com/efrecon/8ce9c75d518b6eb863f667442d7bc679/raw/run.tpl<span class="si">)</span><span class="s2">"</span> <span class="se">\</span>
 YOUR-CONTAINER
</code></pre></div></div>

<p>NOTE: Most of the information is the same, and some additional details are included, but not everything is there (the MAC address, for example).</p>

<h3 id="trim-and-fill-in-any-missing-pieces-in-command-line">Trim and Fill in any missing pieces in command line</h3>

<p>The template version may be missing some important networking information, in this case the <code class="language-plaintext highlighter-rouge">--mac-address</code> we want to re-use. Re-using it avoids having to update the DHCP reservation settings when we re-create.  If a different MAC address were chosen, the DHCP server would allocate a different, random IP address.</p>

<ul>
  <li>tell docker to use specific MAC address
    <ul>
      <li><code class="language-plaintext highlighter-rouge">--mac-address=02:03:04:05:06:07 \</code></li>
    </ul>
  </li>
</ul>

<p>Also, instead of re-using <code class="language-plaintext highlighter-rouge">--privileged</code>, we can be explicit with the required capabilities (something the Container Station container GUI doesn’t allow for!). For example, add for NTP:</p>

<ul>
  <li>capability to set time
    <ul>
      <li><code class="language-plaintext highlighter-rouge">--cap-add=CAP_SYS_TIME \</code></li>
    </ul>
  </li>
  <li>expose NTP port
    <ul>
      <li><code class="language-plaintext highlighter-rouge">--expose "123/udp" \</code></li>
    </ul>
  </li>
</ul>

<p>You can also drop the numerous <code class="language-plaintext highlighter-rouge">--label</code>s since they will be pulled in from the image when you run it.</p>

<h2 id="summary">Summary</h2>

<p>Full command line example (run on the QNAP NAS server over <code class="language-plaintext highlighter-rouge">ssh</code>) using values extracted from the utilities above to re-create the container using the latest image.</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># stop existing container</span>
docker stop pihole-1

<span class="c"># rename (since we are going to update/re-create with same name as before)</span>
docker rename pihole-1 pihole-1-old

<span class="c"># start up again with curated command line</span>
docker run <span class="se">\</span>
  <span class="nt">--name</span> <span class="s2">"/pihole-1"</span> <span class="se">\</span>
  <span class="nt">--runtime</span> <span class="s2">"runc"</span> <span class="se">\</span>
  <span class="nt">--volume</span> <span class="s2">"/share/Software/Configs/pihole:/etc/pihole"</span> <span class="se">\</span>
  <span class="nt">--log-driver</span> <span class="s2">"json-file"</span> <span class="se">\</span>
  <span class="nt">--log-opt</span> max-file<span class="o">=</span><span class="s2">"10"</span> <span class="se">\</span>
  <span class="nt">--log-opt</span> max-size<span class="o">=</span><span class="s2">"10m"</span> <span class="se">\</span>
  <span class="nt">--restart</span> <span class="s2">"unless-stopped"</span> <span class="se">\</span>
  <span class="nt">--cap-add</span> <span class="s2">"CAP_SYS_TIME"</span> <span class="se">\</span>
  <span class="nt">--mac-address</span><span class="o">=</span>02:03:04:05:06:07 <span class="se">\</span>
  <span class="nt">--network</span> <span class="s2">"qnet-dhcp-eth0-abcdef"</span> <span class="se">\</span>
  <span class="nt">--hostname</span> <span class="s2">"3467123adbad"</span> <span class="se">\</span>
  <span class="nt">--expose</span> <span class="s2">"123/udp"</span> <span class="se">\</span>
  <span class="nt">--expose</span> <span class="s2">"443/tcp"</span> <span class="se">\</span>
  <span class="nt">--expose</span> <span class="s2">"53/tcp"</span> <span class="se">\</span>
  <span class="nt">--expose</span> <span class="s2">"53/udp"</span> <span class="se">\</span>
  <span class="nt">--expose</span> <span class="s2">"80/tcp"</span> <span class="se">\</span>
  <span class="nt">--env</span> <span class="s2">"FTLCONF_webserver_api_password=SUPER SECRET"</span> <span class="se">\</span>
  <span class="nt">--env</span> <span class="s2">"FTLCONF_dns_listeningMode=all"</span> <span class="se">\</span>
  <span class="nt">--env</span> <span class="s2">"DNSMASQ_USER=pihole"</span> <span class="se">\</span>
  <span class="nt">--env</span> <span class="s2">"FTL_CMD=no-daemon"</span> <span class="se">\</span>
  <span class="nt">--env</span> <span class="s2">"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"</span> <span class="se">\</span>
  <span class="nt">--env</span> <span class="s2">"TZ=America/Denver"</span> <span class="se">\</span>
  <span class="nt">--detach</span> <span class="se">\</span>
  <span class="nt">--tty</span> <span class="se">\</span>
  <span class="nt">--interactive</span> <span class="se">\</span>
  <span class="nt">--entrypoint</span> <span class="s2">"start.sh"</span> <span class="se">\</span>
  <span class="s2">"pihole/pihole:latest"</span>
</code></pre></div></div>

<p>When you execute the <code class="language-plaintext highlighter-rouge">docker run</code> command, it should pull the latest image tagged <code class="language-plaintext highlighter-rouge">"pihole/pihole:latest"</code>.</p>

<p>Once the image pulls and the container starts up, it should appear in Container Station (including logs and network interface mappings) as before.  And the router DHCP assignment can keep the same IP address mapping, since the container presents the same MAC address as before.</p>

<p><img src="/images/container_station_running_container_info.png" alt="Docker container running in QNAP Container Station &lt;&gt;" title="Docker container running in QNAP Container Station" /></p>]]></content><author><name></name></author><summary type="html"><![CDATA[I needed to update a service running in docker container (Pi-hole – Network-wide Ad Blocking) on my QNAP NAS. It is running as a container and not an “app” (docker compose in QNAP vernacular) and was originally created by GUI. Googling for how to do using the QNAP QTS Container Station GUI only worked for apps. I eventually have figured out a general way to do what I need, using command line docker on the QNAP.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/container_advanced_networking_host.png" /><media:content medium="image" url="https://github.crookster.org/images/container_advanced_networking_host.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Comparing Physical Dimensions of MacBook Air 15-Inch (2023)</title><link href="https://github.crookster.org/comparing-physical-dimensions-of-macbook-air-15-inch-(2023)/" rel="alternate" type="text/html" title="Comparing Physical Dimensions of MacBook Air 15-Inch (2023)" /><published>2023-06-08T00:00:00+00:00</published><updated>2023-06-08T00:00:00+00:00</updated><id>https://github.crookster.org/comparing-physical-dimensions-of-macbook-air-15-inch-(2023)</id><content type="html" xml:base="https://github.crookster.org/comparing-physical-dimensions-of-macbook-air-15-inch-(2023)/"><![CDATA[<p>Comparing physical dimensions of the new MacBook Air 15-Inch (2023) with select 13-inch and 14-inch MacBook models. Amazing feats performs with shrinking bezels and thin components.</p>

<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->

<ul>
  <li><a href="#dimensions">Dimensions</a></li>
  <li><a href="#display">Display</a></li>
  <li><a href="#model-comparisons-to-new-15-inch-macbook-air">Model Comparisons to new 15-inch MacBook Air</a>
    <ul>
      <li><a href="#14-inch-macbook-pro-m2-pro-2023">14-inch MacBook Pro (M2 Pro, 2023)</a></li>
      <li><a href="#13-inch-macbook-pro-2015">13-inch MacBook Pro (2015)</a></li>
      <li><a href="#13-inch-macbook-air-2010">13-inch MacBook Air (2010)</a></li>
    </ul>
  </li>
  <li><a href="#sources">Sources</a></li>
</ul>

<!-- markdown-toc end -->

<h2 id="dimensions">Dimensions</h2>

<p>MacBook Air 15-Inch (2023) <img src="/images/mba_15_display__cylrbo8pinsm_large.jpg" alt="MacBook Air 15-inch (2023) image &lt;&gt;" title="MacBook Air 15-inch (2023) image" /></p>

<p>Table comparing dimensions with nearest to new 15-inch MacBook Air <strong>bolded</strong>:</p>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Height</th>
      <th>Width</th>
      <th>Depth</th>
      <th>Weight</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>13-inch MacBook Air (2010)</td>
      <td>0.68 (1.7 cm)</td>
      <td><strong>12.8 (32.5 cm)</strong></td>
      <td><strong>8.94 (22.7 cm)</strong></td>
      <td>2.9 lbs (1.32 kg)</td>
    </tr>
    <tr>
      <td>13-inch MacBook Pro (Early 2015)</td>
      <td>0.71 (1.8 cm)</td>
      <td>12.53 (31.4 cm)</td>
      <td>8.62 (21.9 cm)</td>
      <td><strong>3.48 lbs (1.58 kg)</strong></td>
    </tr>
    <tr>
      <td>13-inch MacBook Air (M2, 2022)</td>
      <td><strong>0.44 (1.13 cm)</strong></td>
      <td>11.97 (30.41 cm)</td>
      <td>8.46 (21.5 cm)</td>
      <td>2.7 lb (1.24 kg)</td>
    </tr>
    <tr>
      <td>14-inch MacBook Pro (M2 Pro, 2023)</td>
      <td>0.61 (1.55 cm)</td>
      <td>12.31 (31.26 cm)</td>
      <td>8.71 (22.12 cm)</td>
      <td><strong>3.5 lb (1.6 kg)</strong></td>
    </tr>
    <tr>
      <td><em>15-inch MacBook Air (M2, 2023)</em></td>
      <td><strong>0.45 (1.15 cm)</strong></td>
      <td><strong>13.40 (34.04 cm)</strong></td>
      <td><strong>9.35 (23.76 cm)</strong></td>
      <td><strong>3.3 lb (1.51 kg)</strong></td>
    </tr>
  </tbody>
</table>

<p>The model closest in overall size is the <strong>2010 13-inch MacBook Air</strong>, and the new <em>15-inch model</em> is about 5.5 mm <em>thinner</em> (“Height” in the table). The new 15-inch model is basically as thin as the 13-inch MacBook Air (M2, 2022), matching Apple’s marketing copy about it being the thinnest 15-inch laptop.</p>

<p>It weighs less than the 14-inch MacBook Pro-s too.</p>

<h2 id="display">Display</h2>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>2010 13-inch MBA</th>
      <th>2020 M1 13-inch MBP</th>
      <th>2022 13-inch MBA</th>
      <th>2023 15-inch MBA</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Display</td>
      <td>13.3 inches</td>
      <td>13.3 inches</td>
      <td>13.6 inches</td>
      <td><em>15.3 inches</em></td>
    </tr>
  </tbody>
</table>

<p>The new 2023 15-inch model manages to fit a Liquid Retina display that is 2 inches larger diagonally into a chassis that is similar in size to the old 2010 <em>13-inch</em> model. It somehow does this by adding only about half an inch in both width and depth.</p>

<h2 id="model-comparisons-to-new-15-inch-macbook-air">Model Comparisons to new 15-inch MacBook Air</h2>

<h3 id="14-inch-macbook-pro-m2-pro-2023">14-inch MacBook Pro (M2 Pro, 2023)</h3>

<p>The MacBook Pro models for 2023 have been updated with M2-series processors. They squeeze in bigger displays compared to M1 versions. They manage to keep basically identical footprints though. Thinner bezels!</p>

<p>14-inch MacBook Pro (M2 Pro, 2023) <img src="/images/display_14_inch__f298e9zwj02m_large.jpg" alt="14-inch MacBook Pro (M2 Pro, 2023) &lt;&gt;" title="14-inch MacBook Pro (M2 Pro, 2023)" /></p>

<h3 id="13-inch-macbook-pro-2015">13-inch MacBook Pro (2015)</h3>

<p>The MacBook Pro models from 2015 have become collector’s items since they predate the “Touch Bar” and the re-designed key-switch keyboard introduced in the 2016 models, which started a dark age for many MacBook Pro users.</p>

<p>13-inch MacBook Pro (2015) <img src="/images/macbook-pro-early-2015-13in-device.jpg" alt="13-inch MacBook Pro (2015) &lt;&gt;" title="13-inch MacBook Pro (2015)" /></p>

<h3 id="13-inch-macbook-air-2010">13-inch MacBook Air (2010)</h3>

<p>The trend-setting 2010 MacBook Air had quite a large bezel around its non-Retina screen, which adds to its chassis dimensions when compared to recent models in the same size class. It was also even thicker, likely due to the packaging and thermal solutions required for its novel (at the time) thinner-package Intel processors.</p>

<p>13-inch MacBook Air (2010) <img src="/images/macbook-air-2009-2010-13in-device.jpg" alt="13-inch MacBook Air (2010) &lt;&gt;" title="13-inch MacBook Air (2010)" /></p>

<h2 id="sources">Sources</h2>

<ul>
  <li><a href="https://support.apple.com/kb/sp618?locale=en_US">MacBook Air (13-inch, Late 2010) - Technical Specifications</a></li>
  <li><a href="https://support.apple.com/kb/sp715?locale=en_US">MacBook Pro (Retina, 13-inch, Early 2015) - Technical Specifications</a></li>
  <li><a href="https://www.apple.com/macbook-pro-14-and-16/specs/">MacBook Pro 14- and 16-inch - Tech Specs - Apple</a></li>
  <li><a href="https://www.apple.com/macbook-air-13-and-15-m2/specs/">MacBook Air 13- and 15-inch with M2 - Tech Specs - Apple</a></li>
  <li><a href="https://www.apple.com/mac/compare/?modelList=MacBookPro-13_1,MacBookPro-13_2,MacBookPro-16">Mac - Compare Models - Apple</a></li>
  <li><a href="https://support.apple.com/en-us/HT201862">Identify your MacBook Air model - Apple Support</a></li>
  <li><a href="https://support.apple.com/en-us/HT201300">Identify your MacBook Pro model - Apple Support</a></li>
</ul>]]></content><author><name></name></author><summary type="html"><![CDATA[Comparing physical dimensions of the new MacBook Air 15-Inch (2023) with select 13-inch and 14-inch MacBook models. Amazing feats performs with shrinking bezels and thin components.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/mba_15_display__cylrbo8pinsm_large.jpg" /><media:content medium="image" url="https://github.crookster.org/images/mba_15_display__cylrbo8pinsm_large.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Update This Blog for Faster macOS Docker Development</title><link href="https://github.crookster.org/update-this-blog-for-faster-macos-docker-development/" rel="alternate" type="text/html" title="Update This Blog for Faster macOS Docker Development" /><published>2022-03-23T00:00:00+00:00</published><updated>2022-03-23T00:00:00+00:00</updated><id>https://github.crookster.org/update-this-blog-for-faster-macos-docker-development</id><content type="html" xml:base="https://github.crookster.org/update-this-blog-for-faster-macos-docker-development/"><![CDATA[<p>Using docker on macOS for <code class="language-plaintext highlighter-rouge">jekyll</code> development has always been slower than other platforms. There are ways to mitigate this, including a recent Docker Desktop update for macOS.</p>

<p>This blog, hosted on GitHub (User) Pages, has been around since 2015. It’s always been in a jekyll blog format that works with the GitHub backend, which generates the jekyll site for hosting. I prefer to run jekyll locally to preview posts as I compose them before publishing.</p>

<p>Long ago I discovered that, for me, this meant running a jekyll Docker container locally. The downside of this had been that regenerating the site after each edit would take a long time, and then I would have to manually reload the page in the browser once it had been regenerated.</p>

<h2 id="incremental-regeneration-and-livereload">Incremental regeneration and LiveReload</h2>

<p>It turned out to be simple to enable a few features within jekyll to improve the development cycle.</p>

<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code> docker run --rm \
   --volume="$PWD:/srv/jekyll" \
   --volume="$PWD/vendor/bundle:/usr/local/bundle" \
   -it \
   -p 4000:4000 \
<span class="gi">+  -p 35729:35729 \
</span>   jekyll/jekyll:$JEKYLL_VERSION \
<span class="gd">-  jekyll serve --force_polling
</span><span class="gi">+  jekyll serve --incremental --force_polling --livereload
</span></code></pre></div></div>

<p>See <a href="https://github.com/idcrook/idcrook.github.io/blob/main/UsingDockerToServe.md">UsingDockerToServe.md</a> for additional details on how I run a jekyll docker container.</p>

<h4 id="live-reload">Live Reload</h4>

<p>I think the LiveReload feature has been built-in to <code class="language-plaintext highlighter-rouge">jekyll</code> for years, and I am not sure why I never used it. Perhaps it was broken with Docker on Mac or some other similar reason I don’t recall. Anyway, enabling it takes away one of the annoying pieces of previewing edits: reloading the page once the regenerated one is ready. It’s an additional command line switch (<code class="language-plaintext highlighter-rouge">--livereload</code>) to <code class="language-plaintext highlighter-rouge">jekyll serve</code>, plus exposing port <code class="language-plaintext highlighter-rouge">35729</code> from the container, which is used to communicate the live-reload magic with the browser. These changes are highlighted in the <code class="language-plaintext highlighter-rouge">docker run</code> snippet.</p>

<h4 id="incremental">Incremental</h4>

<p>The command line switch <code class="language-plaintext highlighter-rouge">--incremental</code> is more obvious, as the <code class="language-plaintext highlighter-rouge">jekyll serve</code> command itself would message about it while it was starting up. There is a gotcha with it as it doesn’t seem to regenerate the main index pages unless a new post markdown file is added to the directory.</p>

<p>This annoyance can be circumvented by stopping the docker run and re-running it without the incremental switch to re-generate the full website. Since pages are edited much more often than they are added, this isn’t too big of a deal, and the incremental way can be the default.</p>

<h2 id="docker-desktop-46-for-mac">Docker Desktop 4.6 for Mac</h2>

<p>One other thing related to macOS Docker performance is the <code class="language-plaintext highlighter-rouge">VirtioFS</code> layer added very recently to Docker Desktop for Mac. It’s still classified as <em>Experimental</em> but it promises great improvements for file consistency across host/container files. Enabling it on an Intel Mac requires macOS 12.3, so it’s all quite fresh.</p>

<p><img src="/images/enabling-virtiofs-macos.png" alt="enabling Virtiofs &lt;&gt;" title="Instructions for enabling Virtiofs in Docker Desktop" /></p>

<p>Via <a href="https://docs.docker.com/desktop/mac/#experimental-features">https://docs.docker.com/desktop/mac/#experimental-features</a>: “Enable VirtioFS”</p>

<p><a href="https://www.docker.com/blog/speed-boost-achievement-unlocked-on-docker-desktop-4-6-for-mac/">Speed boost achievement unlocked on Docker Desktop 4.6 for Mac - Docker</a></p>

<h2 id="results">Results</h2>

<p>I don’t have very good controls for before/after data, but it has been a clearly perceptible improvement. This used to take around 20 seconds on my 2015 13-inch MBP. Now it is more like 5 seconds.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Regenerating: 1 file(s) changed at 2022-03-23 21:03:54
                    _posts/2022-03-23-update-this-blog-for-faster-macos-docker-development.md
       Jekyll Feed: Generating feed for posts
                    ...done in 4.390487499 seconds.
</code></pre></div></div>]]></content><author><name></name></author><summary type="html"><![CDATA[Using docker on macOS for jekyll development has always been slower than other platforms. There are ways to mitigate this, including a recent Docker Desktop update for macOS.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/enabling-virtiofs-macos.png" /><media:content medium="image" url="https://github.crookster.org/images/enabling-virtiofs-macos.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Comparing Physical Dimensions of MacBook Pro 14-Inch (2021)</title><link href="https://github.crookster.org/comparing-physical-dimensions-of-macbook-pro-14-inch-(2021)/" rel="alternate" type="text/html" title="Comparing Physical Dimensions of MacBook Pro 14-Inch (2021)" /><published>2021-10-19T00:00:00+00:00</published><updated>2021-10-19T00:00:00+00:00</updated><id>https://github.crookster.org/comparing-physical-dimensions-of-macbook-pro-14-inch-(2021)</id><content type="html" xml:base="https://github.crookster.org/comparing-physical-dimensions-of-macbook-pro-14-inch-(2021)/"><![CDATA[<p>Comparing physical dimensions of the new MacBook Pro 14-Inch (2021) with other select 13-inch Mac models.</p>

<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->

<ul>
  <li><a href="#dimensions">Dimensions</a></li>
  <li><a href="#display">Display</a></li>
  <li><a href="#model-comparisons-to-new-14-inch-macbook-pro">Model Comparisons to new 14-inch MacBook Pro</a>
    <ul>
      <li><a href="#13-inch-macbook-pro-m1-2020">13-inch MacBook Pro (M1, 2020)</a></li>
      <li><a href="#13-inch-macbook-pro-2015">13-inch MacBook Pro (2015)</a></li>
      <li><a href="#13-inch-macbook-air-2010">13-inch MacBook Air (2010)</a></li>
    </ul>
  </li>
  <li><a href="#16-inch-macbook-pro-2021-versus-17-inch-macbook-pro-early-2008">16-inch MacBook Pro (2021) versus 17-inch MacBook Pro (Early 2008)</a></li>
  <li><a href="#sources">Sources</a></li>
</ul>

<!-- markdown-toc end -->

<h2 id="dimensions">Dimensions</h2>

<p>MacBook Pro 14-Inch (2021) <img src="/images/97969f70-303a-11ec-9fff-50d91433effd.cf.jpg" alt="MacBook Pro 14-inch (2021) image &lt;&gt;" title="MacBook Pro 14-inch (2021) image" /></p>

<p>Table comparing dimensions with nearest to new 14-inch MacBook Pro <strong>bolded</strong>:</p>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Height</th>
      <th>Width</th>
      <th>Depth</th>
      <th>Weight</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>13-inch MacBook Air (2010)</td>
      <td>0.68 (1.7 cm)</td>
      <td>12.8 (32.5 cm)</td>
      <td>8.94 (22.7 cm)</td>
      <td>2.9 lbs (1.32 kg)</td>
    </tr>
    <tr>
      <td>13-inch MacBook Pro (Early 2015)</td>
      <td>0.71 (1.8 cm)</td>
      <td><strong>12.53 (31.4 cm)</strong></td>
      <td><strong>8.62 (21.9 cm)</strong></td>
      <td><strong>3.48 lbs (1.58 kg)</strong></td>
    </tr>
    <tr>
      <td>13-inch MacBook Pro (M1, 2020)</td>
      <td><strong>0.61 (1.56 cm)</strong></td>
      <td>11.97 (30.41 cm)</td>
      <td>8.36 (21.24 cm)</td>
      <td>3.0 lbs (1.4 kg)</td>
    </tr>
    <tr>
      <td><em>14-inch MacBook Pro (2021)</em></td>
      <td><strong>0.61 (1.55 cm)</strong></td>
      <td><strong>12.31 (31.26 cm)</strong></td>
      <td><strong>8.71 (22.12 cm)</strong></td>
      <td><strong>3.5 lb (1.6 kg)</strong></td>
    </tr>
  </tbody>
</table>

<p>The <em>closest model by overall size and weight</em> is the <strong>2015 13-inch MacBook Pro</strong>, and the new 14-inch model is 2.5 mm <em>thinner</em> (“Height” in table) than it.</p>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Height</th>
      <th>Width</th>
      <th>Depth</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>13-inch MacBook Air (2010)</td>
      <td>+1.5 mm</td>
      <td>+12.4 mm</td>
      <td>+5.8 mm</td>
    </tr>
    <tr>
      <td>13-inch MacBook Pro (Early 2015)</td>
      <td>+2.5 mm</td>
      <td>+1.4 mm</td>
      <td>-2.2 mm</td>
    </tr>
    <tr>
      <td>13-inch MacBook Pro (M1, 2020)</td>
      <td>+0.1 mm</td>
      <td>-8.5 mm</td>
      <td>-8.8 mm</td>
    </tr>
    <tr>
      <td><em>14-inch MacBook Pro (2021)</em></td>
      <td>–</td>
      <td>–</td>
      <td>–</td>
    </tr>
  </tbody>
</table>

<p>In fact the listed <em>Height</em> (laptop thickness), <em>Width</em> (chassis along display width) and <em>Depth</em> (chassis along display height) for the 2015 MBP are all within <strong>2.5 mm</strong> of the new model, which is less than a tenth of an inch. Many cases, sleeves, or backpacks that could accommodate a 2015-era 13-inch MBP should handle the new 2021 14-inch MBP without any issues.</p>

<p>And the weight of the new 2021 14-inch model is also the same as the 2015 13-inch model:</p>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Weight</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>13-inch MacBook Air (2010)</td>
      <td>-0.6 lb</td>
    </tr>
    <tr>
      <td>13-inch MacBook Pro (Early 2015)</td>
      <td>-0.02 lb</td>
    </tr>
    <tr>
      <td>13-inch MacBook Pro (M1, 2020)</td>
      <td>-0.5 lb</td>
    </tr>
    <tr>
      <td><em>14-inch MacBook Pro (2021)</em></td>
      <td>–</td>
    </tr>
  </tbody>
</table>

<h2 id="display">Display</h2>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>2010 13-inch MBA</th>
      <th>2015 13-inch MBP</th>
      <th>2020 M1 13-inch MBP</th>
      <th>2021 14-inch MBP</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Display</td>
      <td>13.3 inches</td>
      <td>13.3 inches</td>
      <td>13.3 inches</td>
      <td><strong>14.2 inches</strong></td>
    </tr>
  </tbody>
</table>

<p>The new 2021 14-inch model manages to fit its Liquid Retina display that is larger diagonally by almost a full inch (0.9 inch) into a chassis that is similar in size to older <em>13-inch</em> models. Have you heard the good news of <strong>“The Notch”</strong>?</p>

<h2 id="model-comparisons-to-new-14-inch-macbook-pro">Model Comparisons to new 14-inch MacBook Pro</h2>

<h3 id="13-inch-macbook-pro-m1-2020">13-inch MacBook Pro (M1, 2020)</h3>

<p>The M1 13-inch MacBook Pro from 2020 is as thin as the new 14-inch model, but its smaller overall display allows its chassis to be roughly 8.5 to 9 mm less wide and less deep than the 14-inch model.</p>

<p>13-inch MacBook Pro (M1, 2020) <img src="/images/71gD8WdSlaL._AC_SL1500__sm1.jpg" alt="13-inch MacBook Pro (M1, 2020) &lt;&gt;" title="13-inch MacBook Pro (M1, 2020)" /></p>

<h3 id="13-inch-macbook-pro-2015">13-inch MacBook Pro (2015)</h3>

<p>The MacBook Pro models from 2015 have become collector’s items since they predate the “Touch Bar” and the re-designed key-switch keyboard introduced in the 2016 models, which started a dark age for many MacBook Pro users. As it happens, the 2015 13-inch chassis has about the same footprint (for both size and weight) as the new 2021 14-inch MacBook Pro.</p>

<p>13-inch MacBook Pro (2015) <img src="/images/macbook-pro-early-2015-13in-device.jpg" alt="13-inch MacBook Pro (2015) &lt;&gt;" title="13-inch MacBook Pro (2015)" /></p>

<h3 id="13-inch-macbook-air-2010">13-inch MacBook Air (2010)</h3>

<p>The trend-setting 2010 MacBook Air had quite a large bezel around its non-Retina screen, which adds to its chassis dimensions when compared to recent models in the same size class. It was also even thicker, likely due to the packaging and thermal solutions required for its novel (at the time) thinner-package Intel processors.</p>

<p>13-inch MacBook Air (2010) <img src="/images/macbook-air-2009-2010-13in-device.jpg" alt="13-inch MacBook Air (2010) &lt;&gt;" title="13-inch MacBook Air (2010)" /></p>

<h2 id="16-inch-macbook-pro-2021-versus-17-inch-macbook-pro-early-2008">16-inch MacBook Pro (2021) versus 17-inch MacBook Pro (Early 2008)</h2>

<p>As a curious point of reference, there were once 17-inch MacBook Pro-s sold. And their feet were visible in product shots. Its highest resolution display option was “17-inch (diagonal) antiglare TFT widescreen LED backlit display with support for millions of colors” with supported resolutions: “1920 by 1200 (native)”.</p>

<p>MacBook Pro (17-inch, Early 2008) <img src="/images/macbook-pro-early-2008-17in-device.jpg" alt="MacBook Pro (17-inch, Early 2008) &lt;&gt;" title="MacBook Pro (17-inch, Early 2008)" /></p>

<table>
  <thead>
    <tr>
      <th>Dimension</th>
      <th>Height</th>
      <th>Width</th>
      <th>Depth</th>
      <th>Weight</th>
      <th>Display</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>MacBook Pro (17-inch, Early 2008)</td>
      <td>1.0 inch (2.59 cm)</td>
      <td>15.4 inches (39.2 cm)</td>
      <td>10.4 inches (26.5 cm)</td>
      <td>6.8 pounds (3.08 kg)</td>
      <td>17.0 inches</td>
    </tr>
    <tr>
      <td>MacBook Pro (16-inch, 2021)</td>
      <td>0.66 (1.68 cm)</td>
      <td>14.01 (35.57 cm)</td>
      <td>9.77 (24.81 cm)</td>
      <td>4.7 lbs (2.1 kg)</td>
      <td>16.2 inches</td>
    </tr>
  </tbody>
</table>

<p>The new 16-inch MacBook Pro trims off 2.1 pounds (1.0 kg) – over 30% lighter – compared to the 2008 17-inch model.</p>

<p>And it’s close to one-and-a-half inches less wide along the display width and over half an inch less along the display height direction, while coming within 0.8 inch of the thirteen-year-older 17-inch MacBook Pro’s display active area.</p>

<p>And it’s over 30% thinner (but alas, without a DVD drive built-in).</p>

<h2 id="sources">Sources</h2>

<ul>
  <li><a href="https://support.apple.com/kb/sp618?locale=en_US">MacBook Air (13-inch, Late 2010) - Technical Specifications</a></li>
  <li><a href="https://support.apple.com/kb/sp715?locale=en_US">MacBook Pro (Retina, 13-inch, Early 2015) - Technical Specifications</a></li>
  <li><a href="https://www.apple.com/macbook-pro-14-and-16/specs/">MacBook Pro 14- and 16-inch - Technical Specifications - Apple</a></li>
  <li><a href="https://www.apple.com/mac/compare/?modelList=MacBookPro-13_1,MacBookPro-13_2,MacBookPro-16">Mac - Compare Models - Apple</a></li>
  <li><a href="https://support.apple.com/en-us/HT201862">Identify your MacBook Air model - Apple Support</a></li>
  <li><a href="https://support.apple.com/en-us/HT201300">Identify your MacBook Pro model - Apple Support</a></li>
  <li><a href="https://support.apple.com/kb/SP4?locale=en_US">MacBook Pro (Early 2008 and Late 2008) - Technical Specifications</a></li>
</ul>]]></content><author><name></name></author><summary type="html"><![CDATA[Comparing physical dimensions of the new MacBook Pro 14-Inch (2021) with other select 13-inch Mac models.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/compare_macbook_pro_14_spacegray__gd16wbpyw7ee_large.jpg" /><media:content medium="image" url="https://github.crookster.org/images/compare_macbook_pro_14_spacegray__gd16wbpyw7ee_large.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Secure Shellfish (Plus Textastic) on iPad Gets the Jobs Done</title><link href="https://github.crookster.org/secure-shellfish-plus-textastic-on-ipad-gets-the-jobs-done/" rel="alternate" type="text/html" title="Secure Shellfish (Plus Textastic) on iPad Gets the Jobs Done" /><published>2021-03-12T00:00:00+00:00</published><updated>2021-03-12T00:00:00+00:00</updated><id>https://github.crookster.org/secure-shellfish-plus-textastic-on-ipad-gets-the-jobs-done</id><content type="html" xml:base="https://github.crookster.org/secure-shellfish-plus-textastic-on-ipad-gets-the-jobs-done/"><![CDATA[<p><a href="https://secureshellfish.app">Secure ShellFish</a> is an iOS/iPadOS terminal + file sharing app by <a href="https://apps.apple.com/us/app/secure-shellfish-ssh-client/id1336634154">Anders Borum</a>. It has a few interesting features that make it a compelling tool to put in your iOS toolbox for  SSH-backed work.</p>

<h2 id="terminal-clients-on-ios">Terminal clients on iOS</h2>

<p>I have been using <a href="https://blink.sh">Blink.sh</a> on iOS for some time. It has improved hardware keyboard support and natively supports the <code class="language-plaintext highlighter-rouge">mosh</code> mobile shell, which is handy for flaky SSH sessions. Originally using it built from source (see its <a href="https://github.com/blinksh/blink">GitHub repo</a>), I eventually purchased Blink.sh from the App Store. It is reliable, but recently some of its limitations came to light, such as its lack of some native iOS integrations.</p>

<p>The rest of this post is about exploring <em>Secure Shellfish</em> as a terminal (SSH client) and its all-around iOS network and file-system capabilities.</p>

<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
<p><strong>Table of Contents</strong></p>

<ul>
  <li><a href="#important-features-of-secure-shellfish">Important Features of Secure ShellFish</a></li>
  <li><a href="#demonstrating-a-workflow-including-screenshots">Demonstrating a Workflow, Including Screenshots</a>
    <ul>
      <li><a href="#open-the-remote-file-directly-in-ios-editor-textastic">Open the (Remote) File Directly in iOS Editor (Textastic)</a></li>
      <li><a href="#re-generate-the-blog-site">(Re-)Generate the Blog Site</a></li>
      <li><a href="#previewing-edits-to-the-blog-post">Previewing Edits to the Blog Post</a></li>
    </ul>
  </li>
  <li><a href="#conclusion">Conclusion</a></li>
</ul>

<!-- markdown-toc end -->

<h2 id="important-features-of-secure-shellfish">Important Features of Secure ShellFish</h2>

<p>Here are a couple of the things that Secure Shellfish can do to make “Terminal Life” better on iOS:</p>

<dl>
  <dt>Files.app compatibility</dt>
  <dd>Server directories appear in the Files app. Modern iOS apps can open files and directories in-place.  This is a <strong>HUGE</strong> feature over a typical SSH terminal app.</dd>
  <dt>iOS integration with your shell</dt>
  <dd>As examples, from at a shell prompt in its terminal, you can access <a href="https://twitter.com/ShellFishApp/status/1174740782479290371?s=20">Shortcuts</a>, open webpages, and copy to pasteboard, all coordinating with native iOS apps or features.</dd>
</dl>

<p><em>Secure Shellfish</em> has many other features that improve the iOS terminal experience. There’s a way to always connect to a <code class="language-plaintext highlighter-rouge">tmux</code> session in a terminal startup configuration. Or iCloud can be used to sync SSH keys. You can edit a file on the server using <a href="https://apps.apple.com/us/app/id1049254261?mt=8">Textastic Code Editor 9 on the App Store</a>. And it can be given permission to keep SSH sessions open in the “background”, meaning it will keep an SSH session running even if you switch away from the app on iOS.</p>

<h2 id="demonstrating-a-workflow-including-screenshots">Demonstrating a Workflow, Including Screenshots</h2>

<p>I have this blog as a jekyll-based GitHub (User) Pages <a href="https://github.com/idcrook/idcrook.github.io">repository on GitHub</a>. Adding a new post involves adding a markdown file which contains the content of the post.</p>

<p>To preview edits as they are being made to the blog post, I can run <code class="language-plaintext highlighter-rouge">jekyll</code> as a server locally. For this demonstration, a Ubuntu virtual machine running on a QNAP NAS which lives on my home LAN is our “server”.  The VM is configured to have an SSH server (which is how <em>Secure ShellFish</em> does most of its magic). It also has a container runtime (Docker) installed, which makes it convenient to run <code class="language-plaintext highlighter-rouge">jekyll</code> from a container.</p>

<p>Following is how this blog post was made by leveraging <em>Secure ShellFish</em> on an iPad.</p>

<h3 id="open-the-remote-file-directly-in-ios-editor-textastic">Open the (Remote) File Directly in iOS Editor (Textastic)</h3>

<p>First, I opened the markdown file I wanted to edit. The following command opens the file (which resides on SSH server) in the Textastic app. The file I was editing is actually the markdown file for this post you are reading!</p>

<p>Installing its officially supported shell integration (for BASH, zsh, etc.) provides a built-in <code class="language-plaintext highlighter-rouge">textastic &lt;&lt;FILE&gt;&gt;</code> command that opens a file (via the SSH session, directly from the server) in the Textastic editor, all right at a shell prompt in the ssh terminal.</p>

<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>textastic _posts/2021-03-10-secure-shellfish-plus-textastic-on-ipad-gets-the-jobs-done.md
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">textastic</code> command relies on installing <em>Secure Shellfish’s</em> native shell integration. “Installing” involves sourcing its <a href="https://github.com/idcrook/i-dotfiles/blob/main/shell/.shellfishrc">shell helper file</a> from shell startup files. The app even provides a way to automatically detect your shell and install its helper file from within an active ssh terminal session.</p>

<h3 id="re-generate-the-blog-site">(Re-)Generate the Blog Site</h3>

<p>After opening the file to edit, I wanted to run Jekyll on the server, so that I could preview edits that were being made to the file. This was done using a <code class="language-plaintext highlighter-rouge">jekyll</code> docker image, and I started the docker container interactively.</p>

<p>Using the same terminal shell session that was used to launch the iOS editor, the running <code class="language-plaintext highlighter-rouge">jekyll</code> server polls the filesystem to detect changes and then regenerates the blog. This docker container runs on the same virtual machine that (1) we are SSH-ed into for our terminal and (2) <em>Secure ShellFish</em> has exposed to iOS, including the file we are editing.</p>

<p><img src="/images/shellfish_textastic_SBS.PNG" alt="Shellfish Terminal next to Textastic" /></p>

<p>If you look closely, you may even notice the <code class="language-plaintext highlighter-rouge">tmux</code> bottom bar in the terminal screen. I configured the server connection in <em>Secure ShellFish</em> this way. This means even if we lose connection to SSH session, the docker container will continue running and we can re-attach to same <code class="language-plaintext highlighter-rouge">tmux</code> session later.</p>

<h3 id="previewing-edits-to-the-blog-post">Previewing Edits to the Blog Post</h3>

<p>I was able to use iPadOS multi-tasking to swap in <code class="language-plaintext highlighter-rouge">Safari.app</code> to view previews of the regenerated pages after edits were made.  This is as simple as opening the web page the <code class="language-plaintext highlighter-rouge">jekyll</code> server was hosting on the VM on my local network.</p>

<p><img src="/images/safariPreview_textastic_SBS.PNG" alt="Preview in Safari next to Textastic" /></p>

<p>Note that even though <em>Secure ShellFish</em> was not a foreground application, it was still running <code class="language-plaintext highlighter-rouge">jekyll</code> in its SSH session (using its background running feature).</p>

<h2 id="conclusion">Conclusion</h2>

<p>I actually had purchased the Pro version of <em>Secure ShellFish</em> some time ago, but had usually used Blink.sh and before that <a href="https://apps.apple.com/us/app/prompt-2/id917437289?ign-mpt=uo%3D4">‎Prompt 2 on the App Store</a> as my “iOS SSH app”. So, this exploration of Secure ShellFish was a recent endeavor.</p>

<p>I was pleased to find <em>Secure ShellFish</em>’s tool chest of capabilities and features that enabled some seamless workflows I hadn’t realized were even possible on iOS.  I’d recommend you try out <em>Secure ShellFish</em> too.</p>

<p>I am also a fan of another of <a href="https://apps.apple.com/us/developer/anders-borum/id343532883">Anders Borum</a>’s apps, the excellent <a href="https://apps.apple.com/us/app/working-copy/id896694807?ign-mpt=uo%3D6">‎Working Copy - Git client on the App Store</a>.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Secure ShellFish is an iOS/iPadOS terminal + file sharing app by Anders Borum. It has a few interesting features that make it a compelling tool to put in your iOS toolbox for SSH-backed work.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://github.crookster.org/images/shellfish_textastic_SBS.PNG" /><media:content medium="image" url="https://github.crookster.org/images/shellfish_textastic_SBS.PNG" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>