<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Greg Wilson's Tech Blog]]></title><description><![CDATA[I lead the AWS Documentation and SDK/CLI teams but I blog about other interests too. The views and opinions expressed on this site are my own and do not reflect]]></description><link>https://gregwilson.tech</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 05:24:58 GMT</lastBuildDate><atom:link href="https://gregwilson.tech/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Every known way to reboot or reset a 2025 Volvo EX90]]></title><description><![CDATA[The 2025 Volvo EX90 is a brand new platform, and like any new platform, there are some issues. I’ve collected all of the ways that various systems can be rebooted or reset. When something goes wrong—infotainment glitches, key errors, faults, or conne...]]></description><link>https://gregwilson.tech/reboot-volvo-ex90</link><guid isPermaLink="true">https://gregwilson.tech/reboot-volvo-ex90</guid><category><![CDATA[ex90]]></category><category><![CDATA[Volvo]]></category><category><![CDATA[volvo ex90 cheat sheet]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Mon, 12 May 2025 06:22:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747028856681/d69bf1ae-18b0-4c07-9622-66222c963a52.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The 2025 Volvo EX90 is a brand new platform, and like any new platform, there are some issues. I’ve collected all of the ways that various systems can be rebooted or reset. When something goes wrong—infotainment glitches, key errors, faults, or connectivity problems—a reset or reboot often solves it. Below is a list of everything I’ve learned so far. 
As of this writing, the current Volvo EX90 software version is 1.2.15.</p>
<p>I recently made a printable <a target="_blank" href="https://drive.google.com/file/d/1tCiwe4gW7EPAJFNtzOzSaEx8gVbUG6wS/view?usp=sharing">Volvo EX90 <strong>Cheat Sheet</strong> version of these procedures</a> for easy reference.</p>
<hr />
<h2 id="heading-reboot-the-center-display-infotainment"><strong>Reboot the Center Display (Infotainment)</strong></h2>
<p>If the screen is frozen, apps are misbehaving, or the system is laggy:</p>
<ul>
<li><p>Make sure the car is parked and <strong>not charging</strong></p>
</li>
<li><p>Press and hold the <strong>play/pause button</strong> below the center screen</p>
</li>
<li><p>Keep holding until the screen goes black and the Volvo logo appears (this can take up to 40 seconds).</p>
</li>
</ul>
<hr />
<h2 id="heading-reset-the-ultra-wideband-uwb-module-fix-digital-key-issues"><strong>Reset the Ultra-Wideband (UWB) Module (fix digital key issues)</strong></h2>
<p>The UWB module is responsible for short-range communication used in digital key functionality. If you are unable to add a digital key, or if your iPhone or Android phone fails to lock/unlock the car reliably, or the car doesn’t recognize your proximity, resetting the UWB module can often help. This one is weird but seems to fix most digital key issues:</p>
<ul>
<li><p>Unlock the car and exit - don’t lock it. Stand outside near the driver’s door</p>
</li>
<li><p>If you use your phone as a digital key, turn off Bluetooth or move your phone away from the car. Also move any key tags away from the car</p>
</li>
<li><p>Tap your finger on the indent on the driver’s door handle and hold for 1 second, then lift your finger</p>
</li>
<li><p>Repeat this <em>four more times</em> (five one-second taps in total)</p>
</li>
<li><p>It will feel useless because there is no response from the car, but it works</p>
</li>
<li><p>Leave the car untouched for 20 minutes. Do not interact with the vehicle during this time (no door handles, no app interactions, no key fob nearby)</p>
</li>
</ul>
<p>This allows the UWB system to go into standby and reset its internal state. There is no visual confirmation (like a flash or chime) that the reset worked, so it seems like nothing happened, but it fixed my problems with adding a digital key. A few days later, the car stopped acknowledging my phone and required me to use the keycard to drive the car. I did the above procedure and 20 minutes later, it all started working again.</p>
<p>Note - this does not erase your digital key or phone pairing; it only resets the radio communication module.</p>
<hr />
<h2 id="heading-central-computer-reset-full-system-reboot"><strong>Central Computer Reset (Full System Reboot)</strong></h2>
<p>The HPA is the EX90’s central computing cluster. It handles essential systems including drive functions, ADAS (driver assistance), infotainment communication, and vehicle coordination. If you're seeing odd warnings like “Propulsion System Failure” or “Driver assistance fault”, blank or frozen screens, or persistent software errors, an HPA reset forces a deeper system-level reboot than a standard display restart. Note that this can only be done once per 24 hours (weird).</p>
<ul>
<li><p>Park the vehicle and keep it on (Ready mode)</p>
</li>
<li><p>Confirm that the charger is <strong>not</strong> connected</p>
</li>
<li><p>Press and hold the <strong>brake pedal</strong></p>
</li>
<li><p>At the same time, press and hold both:</p>
<ul>
<li><p><strong>“-”</strong> (minus button) on the left side of the steering wheel</p>
</li>
<li><p><strong>Volume Up</strong> on the right side</p>
</li>
</ul>
</li>
<li><p>Keep holding until you see the message “<em>System Communication Error</em>” on the instrument panel (it will take 10-20 seconds)</p>
</li>
<li><p>Now, press and hold the <strong>PARK</strong> button (near the gear selector) for <strong>2 seconds</strong> to complete the reset. (There is no immediate feedback other than the screens dimming slightly, but after about 60 seconds, things will reset)</p>
</li>
<li><p>Release everything and wait around 60 seconds for systems to stabilize</p>
</li>
<li><p>The car will likely show a lot of errors as things come back online. After things settle, you might still see error indicators, but don’t panic. Get out of the car and walk away (this allows the car to power down), then come back in a few minutes and things should be good to go</p>
</li>
</ul>
<p>Note that this does not erase data or require re-pairing of devices.</p>
<p>Recently, I started getting an error about the driver assistance system having a fault, and all of the driver assistance features stopped working. I did the reboot above and it fixed the problem. As noted above, after the reboot, the fault indicator light was still on (along with some other errors), but it corrected itself after I got out of the car for a few minutes so it could power off.</p>
<p>On another day, the HUD stopped working completely. This procedure fixed it.</p>
<p><a target="_blank" href="https://www.volvocars.com/au/support/topic/427778da438fd5b4c0a801514f24aac7/">Volvo’s docs on this</a> provide some additional context.</p>
<hr />
<h2 id="heading-factory-reset-erase-all-data"><strong>Factory Reset (Erase All Data)</strong></h2>
<p>To wipe all personal data, settings, keys, and preferences:</p>
<ul>
<li><p>Tap the car icon at the bottom of the center screen</p>
</li>
<li><p>Go to <strong>Settings &gt; System &gt; Reset options &gt; Erase all data (factory reset)</strong></p>
</li>
<li><p>Only the <em>owner profile</em> can perform this action</p>
</li>
</ul>
<p>This will remove user profiles, paired phones, apps you’ve installed, digital keys, and preferences. Most issues can be fixed without doing a factory reset.</p>
<hr />
<h2 id="heading-restore-google-maps-andor-google-assistant-if-missing">Restore Google Maps and/or Google Assistant if Missing</h2>
<p>Earlier today, as my wife and I were switching seats and profiles so she could drive, we noticed that the home screen had no map. It was just a large blank space on the screen. I went to the apps and noticed that both Google Maps and Google Assistant were missing! I did a bit of research and found a quick fix:</p>
<ul>
<li><p>Go to <strong>Settings</strong> &gt; <strong>System</strong> &gt; <strong>Applications</strong> &gt; <strong>Show all apps</strong></p>
</li>
<li><p>Scroll down to <strong>Google Maps</strong>, tap it, and then <strong>enable</strong> it</p>
</li>
<li><p>Repeat for the <strong>Google Assistant</strong> app</p>
</li>
</ul>
<p>After this, you can run the apps and everything will be normal again.</p>
<p>I have no idea why these were suddenly disabled, but I suspect it’s something to do with Apple CarPlay being active with navigation as we switched profiles.</p>
<hr />
<h2 id="heading-fix-missing-speed-limit-indicator"><strong>Fix missing speed limit indicator</strong></h2>
<p>If the speed limit has stopped showing on the driver’s display, this will fix it:</p>
<ul>
<li><p>Go to <strong>Settings</strong> &gt; <strong>System</strong> &gt; <strong>Applications</strong> &gt; <strong>Show all apps</strong></p>
</li>
<li><p>Scroll down to <strong>Google Maps</strong>, tap it, then tap <strong>Storage &amp; cache</strong></p>
</li>
<li><p>Tap <strong>Clear storage</strong> and <strong>Clear cache</strong></p>
</li>
<li><p>You will likely have to re-login to Google Maps</p>
</li>
</ul>
<hr />
<p>Some resets (like the factory reset) require being logged in as the vehicle’s owner.</p>
<p>There is a good <a target="_blank" href="https://www.reddit.com/r/VolvoEX90/comments/1jcelg1/restarting_various_systems_in_the_ex90/#:~:text=Ultra%20Wideband%20\(UWB\)%20Module,for%20the%20module%20to%20reset.">Reddit thread</a> on resetting things in the Volvo EX90.</p>
<p>If you learn of any other ways to reboot/reset, please comment below so I can keep this up to date.</p>
]]></content:encoded></item><item><title><![CDATA[The Acceleration Obsession with Cars]]></title><description><![CDATA[I feel like we live in an age of automotive craziness. When I was a teenager, any car that could accelerate from 0 to 60 in under 6 seconds was considered extremely “sporty” (and out of reach). We’ve come a long way — I recently had the opportunity ...]]></description><link>https://gregwilson.tech/the-acceleration-obsession-with-cars</link><guid isPermaLink="true">https://gregwilson.tech/the-acceleration-obsession-with-cars</guid><category><![CDATA[McMurtry]]></category><category><![CDATA[Spéirling]]></category><category><![CDATA[cars]]></category><category><![CDATA[Sports Cars]]></category><category><![CDATA[acceleration]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Mon, 21 Apr 2025 02:47:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745200892416/6ff01817-e43d-4797-961d-f4f32b4d6ffc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I feel like we live in an age of automotive craziness. When I was a teenager, any car that could accelerate from 0 to 60 in under 6 seconds was considered extremely “sporty” (and out of reach). We’ve come a long way — I recently had the opportunity to drive a car that could accelerate from 0 to 60 mph in just 2.3 seconds, and wow—it was both exhilarating and a bit unsettling. Many of my passengers would become physically ill when I demonstrated this capability. It got me thinking: what are the practical limits of acceleration in production cars, and are we approaching them?</p>
<h2 id="heading-the-human-experience-of-extreme-acceleration">The Human Experience of Extreme Acceleration</h2>
<p>I’ve always been drawn to cars, roller coasters, and airplanes that can accelerate quickly. The most intense acceleration I've ever experienced was 3.2g in a Decathlon airplane doing a loop during an aerobatic lesson, and it felt like my absolute upper limit. At that point, I could feel the blood struggling to reach my brain, my vision starting to narrow, and my chest compressed by an invisible elephant.</p>
<p>To put this in perspective, when a car accelerates from 0-60 mph in 2.3 seconds, you're experiencing around 1.2g of force—enough to make most passengers uncomfortable. At 2.0 seconds, you're approaching 1.4g. Professional drivers can handle this repeatedly, but for the average person, it's an intense experience that the body isn't accustomed to.</p>
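<p>Those g-figures follow from simple kinematics. Assuming constant acceleration, the average force is just the speed reached divided by the time, expressed in units of g. A quick Python sketch makes it concrete:</p>
<pre><code class="lang-python">MPH_TO_MS = 0.44704   # meters per second in one mph
G = 9.80665           # standard gravity, m/s^2

def zero_to_sixty_g(seconds: float) -> float:
    """Average acceleration (in g) for a 0-60 mph run, assuming constant acceleration."""
    return (60 * MPH_TO_MS) / seconds / G

for t in (2.3, 2.0, 1.4):
    print(f"0-60 in {t} s averages about {zero_to_sixty_g(t):.2f} g")
</code></pre>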
<h2 id="heading-the-physical-limits-tires-traction-and-frames">The Physical Limits: Tires, Traction, and Frames</h2>
<p>The acceleration we're seeing in modern hypercars isn't just challenging for human bodies—it's pushing the limits of mechanical engineering in several ways:</p>
<h3 id="heading-tire-traction">Tire Traction</h3>
<p>The fundamental limit on acceleration is traction. Modern performance cars produce enough power that they could easily spin their wheels without moving forward if not for advanced traction control systems. Even with the stickiest street-legal compounds, conventional tires on pavement have a coefficient of friction that maxes out around 1.5. This means that without additional downforce, the theoretical limit for acceleration is around 1.5g, or roughly 1.8 seconds from 0-60 mph.</p>
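<p>That friction-limited floor can be checked directly: if acceleration is capped at μ·g, the minimum time to a target speed is simply speed ÷ (μ·g). A small Python check:</p>
<pre><code class="lang-python">MPH_TO_MS = 0.44704   # meters per second in one mph
G = 9.80665           # standard gravity, m/s^2

def friction_limited_time(mu: float, target_mph: float = 60) -> float:
    """Minimum time to reach target_mph when acceleration is capped at mu * g."""
    return (target_mph * MPH_TO_MS) / (mu * G)

print(f"mu = 1.5 gives a floor of {friction_limited_time(1.5):.2f} s")
</code></pre>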
<h3 id="heading-vehicle-structure">Vehicle Structure</h3>
<p>At extreme acceleration levels, the forces exerted on a car's frame are enormous. Every component experiences stress as the vehicle launches forward. For example, drivetrain mounts must withstand thousands of pounds of force without flexing. This is why hypercars often feature carbon fiber monocoques and intricate internal structures—they need to maintain rigidity under these extreme conditions.</p>
<h2 id="heading-the-mcmurtry-speirling-defying-conventional-limits">The McMurtry Spéirling: Defying Conventional Limits</h2>
<p>The most remarkable advancement in acceleration technology today comes from a small British company called <a target="_blank" href="https://mcmurtry.com/">McMurtry Automotive</a>. Their Spéirling hypercar has shattered acceleration records with its innovative approach.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745201722577/55380451-a80a-469d-9c07-aa3a55186357.jpeg" alt="McMurtry Spéirling side view" class="image--center mx-auto" /></p>
<p>According to McMurtry, the Spéirling can accelerate from 0 to 60 mph in just <strong>1.4 seconds</strong> and cover a quarter-mile in 7.97 seconds. This beats previous production vehicle records by significant margins.</p>
<p>How is this possible? The key innovation is the car's "Downforce-on-Demand" fan system:</p>
<p>The McMurtry Spéirling's unique performance differentiator is its fan-powered downforce system, producing 2,000 kg (4,400 lb) of downforce at a standstill. This incredible technology means the car has massive grip before it even starts moving.</p>
<p>Behind the cockpit are two fans that spin up to 23,000 rpm to suck air out from underneath the car, creating more than two tonnes of downforce from a standstill—enough so it could theoretically stick to the ceiling with the fans at full speed.</p>
<p>And in fact, it recently did exactly that! In April 2025, McMurtry demonstrated this capability by having the Spéirling drive upside down. The car drove up a ramp onto a metal platform that then rotated 180 degrees, with the fans keeping the car firmly attached to the surface.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=g6LYcgaQ46c">https://www.youtube.com/watch?v=g6LYcgaQ46c</a></div>
<p> </p>
<p>This downforce-on-demand system fundamentally changes the acceleration equation. With over two tons of downforce pushing the car into the ground, the effective traction far exceeds what would be possible with just the car's weight and conventional tires. I have to admit, when I saw this video, I immediately wondered what would happen if you made the fans blow the other way! :-)</p>
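<p>It's easy to see why downforce rewrites the equation: grip scales with the normal force on the tires, so two tonnes of fan downforce on a light car multiplies the traction budget. A rough sketch (the 1,000 kg car mass here is an illustrative assumption, not McMurtry's published figure):</p>
<pre><code class="lang-python">MPH_TO_MS = 0.44704   # meters per second in one mph
G = 9.80665           # standard gravity, m/s^2

def grip_limited_time(mu: float, mass_kg: float, downforce_kg: float) -> float:
    """Grip-limited 0-60 time when downforce adds to the tires' normal load."""
    accel = mu * G * (mass_kg + downforce_kg) / mass_kg  # m/s^2
    return (60 * MPH_TO_MS) / accel

# mu = 1.5, assumed 1,000 kg car mass, 2,000 kg of fan downforce
print(f"grip-limited floor: {grip_limited_time(1.5, 1000, 2000):.2f} s")
</code></pre>
<p>With those numbers the grip-limited floor drops well under a second, which means power delivery, not traction, becomes the binding constraint.</p>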
<h2 id="heading-other-acceleration-monsters">Other Acceleration Monsters</h2>
<p>While the McMurtry Spéirling currently holds the crown, several other production cars offer mind-bending acceleration:</p>
<ol>
<li><p><strong>Rimac Nevera</strong>: 0-60 mph in 1.9 seconds</p>
</li>
<li><p><strong>Tesla Model S Plaid</strong>: 0-60 mph in 1.99 seconds</p>
</li>
<li><p><strong>Bugatti Chiron Super Sport</strong>: 0-60 mph in 2.3 seconds</p>
</li>
<li><p><strong>Pininfarina Battista</strong>: 0-60 mph in 1.79 seconds</p>
</li>
<li><p><strong>Aspark Owl</strong>: 0-60 mph in 1.69 seconds</p>
</li>
</ol>
<p>What's notable is that most of these ultra-quick cars are electric. The instant torque delivery of electric motors gives them a significant advantage for off-the-line acceleration.</p>
<p>Marques Brownlee got to drive the Rimac Nevera last year. It’s a must watch. He accelerates at the 3 minute mark.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=sF9xYtouZjY">https://www.youtube.com/watch?v=sF9xYtouZjY</a></div>
<p> </p>
<h2 id="heading-a-historical-perspective-the-evolution-of-acceleration">A Historical Perspective: The Evolution of Acceleration</h2>
<p>To appreciate how far we've come, let's look back at the acceleration kings of previous decades:</p>
<h3 id="heading-1970s">1970s</h3>
<p>The 1970s were the tail end of the original muscle car era, but emissions regulations and the oil crisis had begun to strangle performance. The fastest accelerating cars of this era included the Ferrari 512 BB and the Lamborghini Miura P400SV, both capable of impressive speeds for their time. The quickest American muscle cars of the early 70s, like the Hemi 'Cuda and Chevelle SS 454, could manage 0-60 times in the 5.5-6.5 second range—fast for the time but pedestrian by today's standards.</p>
<h3 id="heading-1980s">1980s</h3>
<p>During the 1980s, the Audi Sport quattro S1 E2 topped the charts with a 0-62 mph acceleration of 3.1 seconds, followed by the Ford RS200 Evolution at 3.8 seconds, and the legendary Porsche 959 at 3.9 seconds.</p>
<h3 id="heading-1990s">1990s</h3>
<p>The 1990s saw significant improvements with the McLaren F1, Bugatti EB110 S, and Jaguar XJR-15 all achieving 0-62 mph in 3.2 seconds. This decade marked the beginning of the modern hypercar era, with cars designed specifically to push the boundaries of performance.</p>
<h3 id="heading-2000s">2000s</h3>
<p>The 2000s introduced the first production cars capable of breaking the 3-second barrier. The Bugatti Veyron 16.4 led the pack with a 0-62 mph time of 2.5 seconds, followed by the Bugatti Veyron Grand Sport and Ultima GTR720 at 2.7 seconds.</p>
<p>Each decade has seen roughly a half-second improvement in the acceleration benchmark. But the 2020s have brought a quantum leap, with multiple cars now achieving sub-2-second times—a milestone that seemed physically impossible just a few years ago.</p>
<h2 id="heading-airplane-acceleration-when-cars-just-cant-compare">Airplane Acceleration: When Cars Just Can't Compare</h2>
<p>While modern hypercars are impressively quick, they're still no match for aircraft when it comes to raw acceleration:</p>
<p>Calculations suggest a fighter jet like the F-22 Raptor with its 70,000 pounds of thrust propelling a 55,000-pound aircraft would achieve 0-60 mph in approximately 2.15 seconds—comparable to the quickest cars. But that's where the comparison ends. Aircraft continue accelerating long after cars have reached their limits.</p>
<p>Even more impressive is the acceleration of an aircraft carrier catapult launch: a catapult can accelerate a 35,000 kg aircraft to 60 mph in just 0.796 seconds. The forces are so extreme that pilots grab special hand rests during the launch; without something to hold on to, they risk knocking themselves unconscious.</p>
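<p>The same back-of-the-envelope math applies to the aircraft numbers above (ignoring drag, rolling resistance, and fuel burn):</p>
<pre><code class="lang-python">MPH_TO_MS = 0.44704   # meters per second in one mph
G = 9.80665           # standard gravity, m/s^2

# F-22: acceleration set by thrust-to-weight ratio (70,000 lb thrust, 55,000 lb aircraft)
f22_accel = (70_000 / 55_000) * G          # m/s^2
f22_time = (60 * MPH_TO_MS) / f22_accel    # seconds to 60 mph
print(f"F-22 0-60: {f22_time:.2f} s")

# Carrier catapult: 0-60 mph in 0.796 seconds
catapult_g = (60 * MPH_TO_MS) / 0.796 / G
print(f"Catapult launch: {catapult_g:.1f} g")
</code></pre>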
<h2 id="heading-the-practical-limits">The Practical Limits</h2>
<p>So what are the practical limits of acceleration in production cars? With conventional technology (wheels on pavement), we're probably approaching them at around 1.5-1.7 seconds for 0-60 mph. The laws of physics impose hard constraints on what's possible without additional tricks like downforce-generating fans.</p>
<p>But the more relevant question might be: what are the practical limits for humans? Most people find acceleration beyond 1g to be uncomfortable and disorienting. The sensation of blood rushing from your head, your organs compressing, and your body being pinned to the seat is not something most of us want to experience on our daily commute.</p>
<p>For now, cars like the McMurtry Spéirling remain fascinating engineering exercises and demonstration platforms for technologies that might eventually filter down to more conventional vehicles. They show us what's possible when talented engineers refuse to accept conventional limitations.</p>
<h3 id="heading-sources-used">Sources used:</h3>
<ol>
<li><p>McMurtry Automotive's official website about the Spéirling Pure electric fan hypercar:</p>
<ul>
<li><a target="_blank" href="https://mcmurtry.com/speirling-pure/">https://mcmurtry.com/speirling-pure/</a></li>
</ul>
</li>
<li><p>Robb Report article about the McMurtry Spéirling becoming the world's fastest production car:</p>
<ul>
<li><a target="_blank" href="https://robbreport.com/motors/cars/mcmurtry-speirling-beat-rimac-neveras-acceleration-records-1234787425/">https://robbreport.com/motors/cars/mcmurtry-speirling-beat-rimac-neveras-acceleration-records-1234787425/</a></li>
</ul>
</li>
<li><p>McMurtry Automotive's article about the Spéirling PURE track car:</p>
<ul>
<li><a target="_blank" href="https://mcmurtry.com/the-mcmurtry-speirling-pure-a-record-breaking-track-only-fan-car-you-can-own/">https://mcmurtry.com/the-mcmurtry-speirling-pure-a-record-breaking-track-only-fan-car-you-can-own/</a></li>
</ul>
</li>
<li><p>McMurtry Spéirling Wikipedia page:</p>
<ul>
<li><a target="_blank" href="https://en.wikipedia.org/wiki/McMurtry_Sp%C3%A9irling">https://en.wikipedia.org/wiki/McMurtry_Sp%C3%A9irling</a></li>
</ul>
</li>
<li><p>Jalopnik article about the McMurtry Spéirling driving upside down:</p>
<ul>
<li><a target="_blank" href="https://www.jalopnik.com/1832203/mcmurty-speirling-drives-upside-down-downforce/">https://www.jalopnik.com/1832203/mcmurty-speirling-drives-upside-down-downforce/</a></li>
</ul>
</li>
<li><p>Carwow review of the McMurtry Spéirling:</p>
<ul>
<li><a target="_blank" href="https://www.carwow.co.uk/news/6544/mcmurtry-speirling-review">https://www.carwow.co.uk/news/6544/mcmurtry-speirling-review</a></li>
</ul>
</li>
<li><p>Top Gear article about the McMurtry Spéirling driving upside down:</p>
<ul>
<li><a target="_blank" href="https://www.topgear.com/car-news/electric/mcmurtry-speirling-first-car-world-drive-upside-down">https://www.topgear.com/car-news/electric/mcmurtry-speirling-first-car-world-drive-upside-down</a></li>
</ul>
</li>
<li><p>McMurtry Automotive's press release about the upside-down driving world first:</p>
<ul>
<li><a target="_blank" href="https://mcmurtry.com/upside-down-driving/">https://mcmurtry.com/upside-down-driving/</a></li>
</ul>
</li>
<li><p>encyCARpedia data about fastest accelerating cars of the 1980s:</p>
<ul>
<li><a target="_blank" href="https://www.encycarpedia.com/top/best-accelerating-1980s-cars">https://www.encycarpedia.com/top/best-accelerating-1980s-cars</a></li>
</ul>
</li>
<li><p>encyCARpedia data about fastest accelerating cars of the 1990s:</p>
<ul>
<li><a target="_blank" href="https://www.encycarpedia.com/top/top-accelerating-1990s-cars">https://www.encycarpedia.com/top/top-accelerating-1990s-cars</a></li>
</ul>
</li>
<li><p>encyCARpedia data about fastest accelerating cars of the 2000s:</p>
<ul>
<li><a target="_blank" href="https://www.encycarpedia.com/top/best-accelerating-2000s-cars">https://www.encycarpedia.com/top/best-accelerating-2000s-cars</a></li>
</ul>
</li>
<li><p>Information about the fastest cars from the 1970s:</p>
<ul>
<li><a target="_blank" href="https://www.encycarpedia.com/us/top/fastest-1970s-cars">https://www.encycarpedia.com/us/top/fastest-1970s-cars</a></li>
</ul>
</li>
<li><p>TopSpeed article about fastest accelerating reliable used cars from the 1980s:</p>
<ul>
<li><a target="_blank" href="https://www.topspeed.com/fastest-accelerating-reliable-used-cars-from-the-1980s/">https://www.topspeed.com/fastest-accelerating-reliable-used-cars-from-the-1980s/</a></li>
</ul>
</li>
<li><p>Aviation Stack Exchange discussion about fastest accelerating aircraft on takeoff:</p>
<ul>
<li><a target="_blank" href="https://aviation.stackexchange.com/questions/35482/what-is-the-fastest-accelerating-aircraft-on-takeoff">https://aviation.stackexchange.com/questions/35482/what-is-the-fastest-accelerating-aircraft-on-takeoff</a></li>
</ul>
</li>
<li><p>Executive Flyers article about how fast planes accelerate on the runway:</p>
<ul>
<li><a target="_blank" href="https://executiveflyers.com/how-fast-do-planes-accelerate-on-the-runway/">https://executiveflyers.com/how-fast-do-planes-accelerate-on-the-runway/</a></li>
</ul>
</li>
<li><p>Aviation Stack Exchange discussion about aircraft carrier catapult acceleration:</p>
<ul>
<li><a target="_blank" href="https://aviation.stackexchange.com/questions/45880/what-is-the-0-60mph-time-of-an-aircraft-carrier-catapult">https://aviation.stackexchange.com/questions/45880/what-is-the-0-60mph-time-of-an-aircraft-carrier-catapult</a></li>
</ul>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[What's new and what's next with AWS Docs]]></title><description><![CDATA[I have been at AWS for over 3 years now, leading the teams that build AWS Documentation and the AWS SDKs/CLI. It's been an incredibly fun ride, and I've learned a lot about the unique challenges of building and managing these types of systems at such...]]></description><link>https://gregwilson.tech/whats-new-aws-docs</link><guid isPermaLink="true">https://gregwilson.tech/whats-new-aws-docs</guid><category><![CDATA[AWS]]></category><category><![CDATA[Amazon Web Services]]></category><category><![CDATA[documentation]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Thu, 13 Feb 2025 18:45:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739472262419/e82b2356-06f2-4ca0-9630-1799cf1f19a0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have been at AWS for over 3 years now, leading the teams that build AWS Documentation and the AWS SDKs/CLI. It's been an incredibly fun ride, and I've learned a lot about the unique challenges of building and managing these types of systems at such an enormous scale.</p>
<p>Since our documentation and SDKs are used by nearly every AWS customer, I believe our team needs to be more transparent about what we're working on and how we're prioritizing work to better incorporate your feedback. To that end, I’m going to start blogging about it.</p>
<p>This first post provides an update on AWS Documentation, sharing details about recent content refactoring, website updates, new content types, and a sneak peek at how we're leveraging AI. I'll follow up soon with a similar update about the SDKs and CLI.</p>
<p>The post is on the AWS Insights blog and can be found <a target="_blank" href="https://aws.amazon.com/blogs/aws-insights/aws-documentation-update-progress-challenges-and-whats-next-for-2025/">here</a>.</p>
<p>I also shared it on:</p>
<ul>
<li><p><a target="_blank" href="https://www.reddit.com/r/aws/comments/1ipdyku/aws_documentation_update_refactored_content/">Reddit</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/posts/gregorywilson_aws-documentation-update-progress-challenges-activity-7295866532503371776-pwO-/">LinkedIn</a></p>
</li>
<li><p><a target="_blank" href="https://bsky.app/profile/gregwilson.tech/post/3li3cf37mtc2w">BlueSky</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Stupid but useful AI tricks: Creating calendar entries for upcoming full moons using Claude 3.5 Sonnet]]></title><description><![CDATA[As mentioned in my previous post, I recently discovered that Claude 3.5 Sonnet can create iCalendar (.ics) files, allowing me to generate calendar entries for almost anything.
As a fun experiment, I tried the following prompt:

My prompt: I live in S...]]></description><link>https://gregwilson.tech/stupid-but-useful-ai-tricks-creating-calendar-entries-for-upcoming-full-moons-using-claude-35-sonnet</link><guid isPermaLink="true">https://gregwilson.tech/stupid-but-useful-ai-tricks-creating-calendar-entries-for-upcoming-full-moons-using-claude-35-sonnet</guid><category><![CDATA[claude.ai]]></category><category><![CDATA[Astronomy]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Sun, 25 Aug 2024 15:23:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724599113112/e901459e-0187-4193-9152-9845bb8b3555.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As mentioned in <a target="_blank" href="https://gregsramblings.com/stupid-but-useful-ai-tricks-creating-calendar-entries-from-an-image-using-anthropic-claude-35">my previous post</a>, I recently discovered that Claude 3.5 Sonnet can create iCalendar (.ics) files, allowing me to generate calendar entries for almost anything.</p>
<p>As a fun experiment, I tried the following prompt:</p>
<blockquote>
<p><strong>My prompt: I live in San Francisco, CA. Create an icalendar file that creates a non-blocking all-day event for every full moon for the next 12 months. For each, indicate the type of full moon in the title of the event.</strong></p>
</blockquote>
<p>Claude responded:</p>
<blockquote>
<p>Certainly! I'll create an iCalendar file for you with non-blocking all-day events for the full moons in the next 12 months, including the type of full moon in the title. This will be useful for tracking lunar cycles in San Francisco.</p>
</blockquote>
<pre><code class="lang-plaintext">BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Full Moon Calendar//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
SUMMARY:Wolf Moon (Full Moon)
DTSTART;VALUE=DATE:20240825
DTEND;VALUE=DATE:20240826
DTSTAMP:20240825T000000Z
UID:20240825-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Snow Moon (Full Moon)
DTSTART;VALUE=DATE:20240924
DTEND;VALUE=DATE:20240925
DTSTAMP:20240924T000000Z
UID:20240924-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hunter's Moon (Full Moon)
DTSTART;VALUE=DATE:20241023
DTEND;VALUE=DATE:20241024
DTSTAMP:20241023T000000Z
UID:20241023-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beaver Moon (Full Moon)
DTSTART;VALUE=DATE:20241122
DTEND;VALUE=DATE:20241123
DTSTAMP:20241122T000000Z
UID:20241122-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cold Moon (Full Moon)
DTSTART;VALUE=DATE:20241221
DTEND;VALUE=DATE:20241222
DTSTAMP:20241221T000000Z
UID:20241221-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wolf Moon (Full Moon)
DTSTART;VALUE=DATE:20250120
DTEND;VALUE=DATE:20250121
DTSTAMP:20250120T000000Z
UID:20250120-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Snow Moon (Full Moon)
DTSTART;VALUE=DATE:20250218
DTEND;VALUE=DATE:20250219
DTSTAMP:20250218T000000Z
UID:20250218-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Worm Moon (Full Moon)
DTSTART;VALUE=DATE:20250320
DTEND;VALUE=DATE:20250321
DTSTAMP:20250320T000000Z
UID:20250320-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pink Moon (Full Moon)
DTSTART;VALUE=DATE:20250418
DTEND;VALUE=DATE:20250419
DTSTAMP:20250418T000000Z
UID:20250418-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Flower Moon (Full Moon)
DTSTART;VALUE=DATE:20250518
DTEND;VALUE=DATE:20250519
DTSTAMP:20250518T000000Z
UID:20250518-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Strawberry Moon (Full Moon)
DTSTART;VALUE=DATE:20250616
DTEND;VALUE=DATE:20250617
DTSTAMP:20250616T000000Z
UID:20250616-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
BEGIN:VEVENT
SUMMARY:Buck Moon (Full Moon)
DTSTART;VALUE=DATE:20250716
DTEND;VALUE=DATE:20250717
DTSTAMP:20250716T000000Z
UID:20250716-fullmoon@fullmooncalendar.com
TRANSP:TRANSPARENT
END:VEVENT
END:VCALENDAR
</code></pre>
<blockquote>
<p>I've created an iCalendar file with non-blocking all-day events for the full moons over the next 12 months, starting from today (August 25, 2024). Each event includes the type of full moon in its title. Here's a breakdown of the events:</p>
<ol>
<li><p>August 25, 2024: Wolf Moon</p>
</li>
<li><p>September 24, 2024: Snow Moon</p>
</li>
<li><p>October 23, 2024: Hunter's Moon</p>
</li>
<li><p>November 22, 2024: Beaver Moon</p>
</li>
<li><p>December 21, 2024: Cold Moon</p>
</li>
<li><p>January 20, 2025: Wolf Moon</p>
</li>
<li><p>February 18, 2025: Snow Moon</p>
</li>
<li><p>March 20, 2025: Worm Moon</p>
</li>
<li><p>April 18, 2025: Pink Moon</p>
</li>
<li><p>May 18, 2025: Flower Moon</p>
</li>
<li><p>June 16, 2025: Strawberry Moon</p>
</li>
<li><p>July 16, 2025: Buck Moon</p>
</li>
</ol>
<p>You can save this content as a .ics file and import it into your calendar application. The events are set as non-blocking (TRANSP:TRANSPARENT), so they won't interfere with your other scheduled events.</p>
</blockquote>
<p>I saved the file as fullmoon.ics and imported it to my Google calendar (go to settings, import/export).</p>
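<p>Before importing a generated .ics, it's worth a quick sanity check that every event really is a non-blocking all-day event. Here's a minimal sketch in Python (the <code>check_ics</code> helper is my own illustration, not part of the Claude session):</p>

```python
def check_ics(text):
    """Parse a minimal .ics string and flag events that are not
    non-blocking (TRANSP:TRANSPARENT) all-day (DATE-valued DTSTART) events."""
    events, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append(current)
            current = None
        elif current is not None and ":" in line:
            key, value = line.split(":", 1)
            current[key] = value
    problems = []
    for ev in events:
        if ev.get("TRANSP") != "TRANSPARENT":
            problems.append(ev.get("SUMMARY", "?") + ": not marked non-blocking")
        if "DTSTART;VALUE=DATE" not in ev:
            problems.append(ev.get("SUMMARY", "?") + ": not an all-day event")
    return events, problems
```

<p>Paste the calendar text into a string, and any event that would block your schedule shows up in the problems list.</p>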
<p>This will also work with ChatGPT, but you need to explicitly ask it to display the contents of the ics file.</p>
<p>I just tried it with Google Gemini with no luck, but I'm sure it will work in the future:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724599938052/bd9024e6-6c6c-404a-ba32-91b843858a3e.png" alt="Screenshot of Google Gemini responding to the prompt in this article. The response is &quot;Sorry, I can't help with that&quot;" class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Stupid but useful AI tricks: Creating calendar entries from an image using Anthropic Claude 3.5]]></title><description><![CDATA[A few days ago, my jazz piano teacher sent me the new fall/winter schedule for my private jazz piano lessons -- 13 different dates -- as a JPG (mine are outlined in green marker):

I was too lazy to go make 13 entries in Google Calendar, so I decided...]]></description><link>https://gregwilson.tech/stupid-but-useful-ai-tricks-creating-calendar-entries-from-an-image-using-anthropic-claude-35</link><guid isPermaLink="true">https://gregwilson.tech/stupid-but-useful-ai-tricks-creating-calendar-entries-from-an-image-using-anthropic-claude-35</guid><category><![CDATA[icalendar]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[calendar]]></category><category><![CDATA[image processing]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Sun, 25 Aug 2024 02:23:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724552182244/c60e8dff-b1c8-4209-b714-8ab4dfe5cac7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few days ago, my jazz piano teacher sent me the new fall/winter schedule for my private jazz piano lessons -- 13 different dates -- as a JPG (mine are outlined in green marker):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724550508637/b7807f68-6a22-45bc-85f0-78faadcaec63.jpeg" alt class="image--center mx-auto" /></p>
<p>I was too lazy to go make 13 entries in Google Calendar, so I decided to see if Claude could help me out:</p>
<p>I first uploaded the jpg to Claude 3.5 Sonnet...</p>
<blockquote>
<p><strong>My prompt:</strong> List the dates that are outlined in green</p>
<p>The dates outlined in green in the image are:</p>
<p>Friday, 13-Sep-24 Friday, 27-Sep-24 Friday, 11-Oct-24 Friday, 8-Nov-24 Friday, 6-Dec-24 Friday, 20-Dec-24 Friday, 17-Jan-25 Friday, 31-Jan-25 Friday, 28-Feb-25 Friday, 14-Mar-25 Friday, 11-Apr-25 Friday, 25-Apr-25 Friday, 23-May-25</p>
<p>These dates correspond to the Friday column for students who have their first every-other-week lesson starting the week of Tuesday Sept 10, 2024.</p>
</blockquote>
<p>Cool - that was easy and accurate. Now I need to get it into my calendar, so I asked it to create an <a target="_blank" href="https://en.wikipedia.org/wiki/ICalendar">ics</a> file...</p>
<blockquote>
<p><strong>My prompt:</strong> Create a calendar ics file that creates an appointment on each of those dates at 2pm pacific time. The appointment title will be “Jazz Piano Lesson”</p>
<p>Certainly. I'll create an ICS (iCalendar) file with appointments for "Jazz Piano Lesson" on each of the dates you specified, set for 2:00 PM Pacific Time.</p>
</blockquote>
<pre><code class="lang-plaintext">BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Claude//Jazz Piano Lessons//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20240913T210000Z
DTEND:20240913T220000Z
DTSTAMP:20240824T000000Z
UID:20240913jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20240927T210000Z
DTEND:20240927T220000Z
DTSTAMP:20240824T000000Z
UID:20240927jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20241011T210000Z
DTEND:20241011T220000Z
DTSTAMP:20240824T000000Z
UID:20241011jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20241108T220000Z
DTEND:20241108T230000Z
DTSTAMP:20240824T000000Z
UID:20241108jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20241206T220000Z
DTEND:20241206T230000Z
DTSTAMP:20240824T000000Z
UID:20241206jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20241220T220000Z
DTEND:20241220T230000Z
DTSTAMP:20240824T000000Z
UID:20241220jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20250117T220000Z
DTEND:20250117T230000Z
DTSTAMP:20240824T000000Z
UID:20250117jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20250131T220000Z
DTEND:20250131T230000Z
DTSTAMP:20240824T000000Z
UID:20250131jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20250228T220000Z
DTEND:20250228T230000Z
DTSTAMP:20240824T000000Z
UID:20250228jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20250314T210000Z
DTEND:20250314T220000Z
DTSTAMP:20240824T000000Z
UID:20250314jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20250411T210000Z
DTEND:20250411T220000Z
DTSTAMP:20240824T000000Z
UID:20250411jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20250425T210000Z
DTEND:20250425T220000Z
DTSTAMP:20240824T000000Z
UID:20250425jazzpiano@claude.ai
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jazz Piano Lesson
DTSTART:20250523T210000Z
DTEND:20250523T220000Z
DTSTAMP:20240824T000000Z
UID:20250523jazzpiano@claude.ai
END:VEVENT
END:VCALENDAR
</code></pre>
<blockquote>
<p>I've created an ICS file with the following details:</p>
<ol>
<li><p>Each event is titled "Jazz Piano Lesson"</p>
</li>
<li><p>The events are scheduled for 2:00 PM Pacific Time on each of the specified dates</p>
</li>
<li><p>Each event is set to last for one hour</p>
</li>
<li><p>The time zone is properly adjusted to UTC (Z) in the file</p>
</li>
<li><p>Unique identifiers (UIDs) are provided for each event</p>
</li>
</ol>
<p>You can save this content as a .ics file and import it into most calendar applications. Here are a few notes about the file:</p>
<ol>
<li><p>The times are in UTC (Coordinated Universal Time). 2:00 PM Pacific Time translates to 9:00 PM UTC (21:00) for dates within Daylight Saving Time, and 10:00 PM UTC (22:00) for dates outside Daylight Saving Time. The file accounts for these changes.</p>
</li>
<li><p>The DTSTAMP for all events is set to the date of this conversation (August 24, 2024) as a placeholder. Your calendar application will likely update this when you import the events.</p>
</li>
<li><p>The UID for each event is unique, based on the date of the lesson and a simple string.</p>
</li>
</ol>
<p>To use this file:</p>
<ol>
<li><p>Copy the content of the artifact</p>
</li>
<li><p>Paste it into a new text file</p>
</li>
<li><p>Save the file with a .ics extension (e.g., "jazz_piano_lessons.ics")</p>
</li>
<li><p>Import this file into your preferred calendar application</p>
</li>
</ol>
<p>Most calendar applications should be able to read this file and create the appropriate recurring events. Let me know if you need any modifications or have any questions!</p>
</blockquote>
<p>I went to calendar.google.com, imported the ics, and my appointments were imported perfectly!<br />(To import ics files into Google calendar, go to the calendar settings, then to import/export)</p>
<p>I'm pleasantly surprised at how easy this was!</p>
<p>I tried the exact same thing in ChatGPT -- it identified the dates in the image, but it said it was unable to create an .ics file. However, it did give me some Python code to create the ics file for me. I came back to that conversation later and said, "Just show me the content of the .ics file" and it did!</p>
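<p>The Daylight Saving Time handling is easy to verify yourself with Python's <code>zoneinfo</code> module -- 2:00 PM Pacific is 21:00 UTC while DST is in effect and 22:00 UTC in standard time, matching the DTSTART values in the file (the <code>lesson_utc</code> helper is my own sketch):</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def lesson_utc(date_str):
    """Return the UTC time-of-day for a 2:00 PM Pacific lesson on date_str."""
    local = datetime.fromisoformat(date_str + "T14:00").replace(
        tzinfo=ZoneInfo("America/Los_Angeles"))
    return local.astimezone(ZoneInfo("UTC")).strftime("%H%M%SZ")

print(lesson_utc("2024-09-13"))  # during DST: 210000Z
print(lesson_utc("2024-12-06"))  # standard time: 220000Z
```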
]]></content:encoded></item><item><title><![CDATA[How to Use Amazon Bedrock APIs for Anthropic Claude 3.5 Sonnet in Python]]></title><description><![CDATA[The Anthropic APIs and Python SDK are solid, but they do have some limits that might make certain tasks challenging, especially as of this writing. There is an alternative path to using Anthropic's models - Amazon Bedrock. Amazon Bedrock now offers a...]]></description><link>https://gregwilson.tech/how-to-use-aws-bedrock-apis-for-anthropic-claude-35-sonnet-in-python</link><guid isPermaLink="true">https://gregwilson.tech/how-to-use-aws-bedrock-apis-for-anthropic-claude-35-sonnet-in-python</guid><category><![CDATA[claude.ai]]></category><category><![CDATA[Amazon Bedrock]]></category><category><![CDATA[AWS Bedrock]]></category><category><![CDATA[Python]]></category><category><![CDATA[Claude 3.5]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Mon, 19 Aug 2024 05:51:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724046243943/46c0b530-bb19-481f-b073-0937a0d4fa78.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Anthropic APIs and Python SDK are solid, but they do have some limits that might make certain tasks challenging, especially as of this writing. There is an alternative path to using Anthropic's models - <a target="_blank" href="https://aws.amazon.com/bedrock">Amazon Bedrock</a>. Amazon Bedrock now offers access to Anthropic's Claude models, <a target="_blank" href="https://aws.amazon.com/blogs/aws/anthropics-claude-3-5-sonnet-model-now-available-in-amazon-bedrock-the-most-intelligent-claude-model-yet/">including Claude 3.5 Sonnet</a>. If you have the need for different limits or if you are integrating with other AWS services, it might make sense to migrate your existing Python code from using the Anthropic API to the Amazon Bedrock APIs. This guide will step you through migrating your code. 
It assumes you're already familiar with Anthropic and AWS authentication.</p>
<p>I decided not to delve into the differences in requests-per-minute, tokens-per-minute, tokens-per-day, API cost, etc., as these change frequently and explaining all of this would require significantly more text. Do your research to ensure this is the right path for you. Personally, I encountered the tokens-per-day limit with Anthropic APIs when asking Claude to provide descriptions for thousands of images. Switching to Bedrock allowed me to complete this task in a single run.</p>
<h2 id="heading-1-installing-required-libraries">1. Installing Required Libraries</h2>
<p>First, ensure you have the necessary libraries installed:</p>
<pre><code class="lang-bash">pip install anthropic boto3
</code></pre>
<h2 id="heading-2-importing-libraries-and-setting-up-clients">2. Importing Libraries and Setting Up Clients</h2>
<p>Replace your Anthropic imports with AWS SDK for Python (Boto3):</p>
<pre><code class="lang-python"><span class="hljs-comment"># Old Anthropic import</span>
<span class="hljs-comment"># from anthropic import Anthropic</span>

<span class="hljs-comment"># New Amazon Bedrock import</span>
<span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">import</span> json
</code></pre>
<p>Then, set up the Amazon Bedrock client:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Old Anthropic client setup</span>
<span class="hljs-comment"># anthropic = Anthropic(api_key="your-api-key")</span>

<span class="hljs-comment"># New Amazon Bedrock client setup</span>
bedrock_runtime = boto3.client(
    service_name=<span class="hljs-string">'bedrock-runtime'</span>, <span class="hljs-comment"># see sample code below for how I used Claude 3.5 Sonnet model</span>
    region_name=<span class="hljs-string">'your-aws-region'</span> <span class="hljs-comment"># check docs - not all models are available in all regions</span>
)
</code></pre>
<h2 id="heading-3-constructing-the-request">3. Constructing the Request</h2>
<p>The request structure for Amazon Bedrock differs from Anthropic's API. Here's how to adapt your code:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Old Anthropic request</span>
<span class="hljs-comment"># response = anthropic.messages.create(</span>
<span class="hljs-comment">#     model="claude-3-sonnet-20240229",</span>
<span class="hljs-comment">#     max_tokens=300,</span>
<span class="hljs-comment">#     temperature=0.7,</span>
<span class="hljs-comment">#     messages=[</span>
<span class="hljs-comment">#         {"role": "user", "content": "Your message here"}</span>
<span class="hljs-comment">#     ]</span>
<span class="hljs-comment"># )</span>

<span class="hljs-comment"># New Amazon Bedrock request</span>
request_body = json.dumps({
    <span class="hljs-string">"anthropic_version"</span>: <span class="hljs-string">"bedrock-2023-05-31"</span>,
    <span class="hljs-string">"max_tokens"</span>: <span class="hljs-number">300</span>,
    <span class="hljs-string">"temperature"</span>: <span class="hljs-number">0.7</span>,
    <span class="hljs-string">"messages"</span>: [
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>, <span class="hljs-string">"content"</span>: <span class="hljs-string">"Your message here"</span>}
    ]
})
</code></pre>
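<p>If you make many Bedrock calls, it can help to centralize the body construction so malformed requests fail fast, before any network call. A small sketch (the <code>build_bedrock_body</code> helper is my own, not part of either SDK):</p>

```python
import json

def build_bedrock_body(messages, max_tokens=300, temperature=0.7,
                       anthropic_version="bedrock-2023-05-31"):
    """Build the JSON request body Bedrock expects for Anthropic models,
    raising ValueError early instead of letting Bedrock reject the call."""
    if not messages:
        raise ValueError("messages must be a non-empty list")
    for m in messages:
        if not {"role", "content"} <= set(m):
            raise ValueError("each message needs 'role' and 'content' keys")
    return json.dumps({
        "anthropic_version": anthropic_version,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": messages,
    })

body = build_bedrock_body([{"role": "user", "content": "Your message here"}])
```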
<h2 id="heading-4-sending-the-request-and-handling-the-response">4. Sending the Request and Handling the Response</h2>
<p>Now, let's send the request and process the response:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Send request to Amazon Bedrock</span>
response = bedrock_runtime.invoke_model(
    modelId=<span class="hljs-string">'anthropic.claude-3-5-sonnet-20240620-v1:0'</span>,
    body=request_body
)

<span class="hljs-comment"># Parse the response</span>
response_body = json.loads(response[<span class="hljs-string">'body'</span>].read())
generated_text = response_body[<span class="hljs-string">'content'</span>][<span class="hljs-number">0</span>][<span class="hljs-string">'text'</span>]

print(generated_text)
</code></pre>
<h2 id="heading-5-error-handling">5. Error Handling</h2>
<p>Adapt your error handling to work with Amazon Bedrock:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> botocore.exceptions <span class="hljs-keyword">import</span> ClientError

<span class="hljs-keyword">try</span>:
    response = bedrock_runtime.invoke_model(
        modelId=<span class="hljs-string">'anthropic.claude-3-5-sonnet-20240620-v1:0'</span>,
        body=request_body
    )
    <span class="hljs-comment"># Process response as shown above</span>
<span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
    print(<span class="hljs-string">f"An error occurred: <span class="hljs-subst">{e.response[<span class="hljs-string">'Error'</span>][<span class="hljs-string">'Message'</span>]}</span>"</span>)
</code></pre>
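<p>Bedrock calls can also fail transiently when you're throttled. Here's a retry sketch with exponential backoff (the helper and the set of retryable codes are my assumptions -- check the Bedrock documentation for the authoritative list of error codes):</p>

```python
import random
import time

RETRYABLE = {"ThrottlingException", "ServiceUnavailableException",
             "ModelTimeoutException"}

def _error_code(exc):
    # botocore's ClientError exposes the service error code here
    resp = getattr(exc, "response", None)
    if isinstance(resp, dict):
        return resp.get("Error", {}).get("Code")
    return None

def invoke_with_retries(call, max_attempts=5, base_delay=0.5):
    """Run call() and retry throttling-style failures with exponential
    backoff plus jitter; any other error is re-raised immediately."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception as exc:
            if _error_code(exc) not in RETRYABLE or attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

<p>Wrap the call in a zero-argument lambda, e.g. <code>invoke_with_retries(lambda: bedrock_runtime.invoke_model(modelId=model_id, body=request_body))</code>.</p>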
<h2 id="heading-6-key-differences-to-note">6. Key Differences to Note</h2>
<ol>
<li><p><strong>Authentication</strong>: Amazon Bedrock uses AWS credentials instead of an API key.</p>
</li>
<li><p><strong>Request Structure</strong>: Amazon Bedrock requires a JSON string for the request body.</p>
</li>
<li><p><strong>Model Specification</strong>: The model is specified as a <code>modelId</code> parameter in the <code>invoke_model()</code> call.</p>
</li>
<li><p><strong>Response Parsing</strong>: The response from Amazon Bedrock needs to be parsed from a JSON structure.</p>
</li>
<li><p><strong>Error Handling</strong>: Use boto3's <code>ClientError</code> for catching AWS-specific exceptions.</p>
</li>
</ol>
<h2 id="heading-7-important-considerations-for-amazon-bedrock-setup">7. Important Considerations for Amazon Bedrock Setup</h2>
<ul>
<li><p>Ensure your AWS credentials have the necessary permissions to access Bedrock services.</p>
</li>
<li><p>Check that the Claude 3.5 Sonnet model (anthropic.claude-3-5-sonnet-20240620-v1:0) is available in your chosen AWS region (it's only in a few as of this writing).</p>
</li>
<li><p>Verify that your AWS account has been granted access to the Claude 3.5 Sonnet model. If not, you can simply <a target="_blank" href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html">request access</a> through the console and have access in a few minutes.</p>
</li>
<li><p>The <code>anthropic_version</code> parameter in the request body may need updating based on the latest Bedrock API version.</p>
</li>
</ul>
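<p>You can check model availability in a region programmatically with the Bedrock <code>ListFoundationModels</code> API. The filter helper below is my own sketch; the live call is commented out because it requires AWS credentials:</p>

```python
def claude_sonnet_ids(model_summaries):
    """Filter Bedrock model summaries down to Claude 3.5 Sonnet ids.
    model_summaries is shaped like the modelSummaries list returned by
    the ListFoundationModels API."""
    return [m["modelId"] for m in model_summaries
            if "claude-3-5-sonnet" in m["modelId"]]

# Live usage (requires boto3 and bedrock:ListFoundationModels permission):
# import boto3
# bedrock = boto3.client("bedrock", region_name="us-east-1")
# summaries = bedrock.list_foundation_models(byProvider="Anthropic")["modelSummaries"]
# print(claude_sonnet_ids(summaries))
```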
<h2 id="heading-8-making-it-all-work">8. Making it all work</h2>
<h3 id="heading-a-simple-anthropic-api-example">A simple Anthropic API example:</h3>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> anthropic <span class="hljs-keyword">import</span> Anthropic
<span class="hljs-keyword">import</span> os

<span class="hljs-comment"># Set up the Anthropic client</span>
anthropic = Anthropic(api_key=os.environ.get(<span class="hljs-string">"ANTHROPIC_API_KEY"</span>))

<span class="hljs-comment"># Define the message</span>
message = <span class="hljs-string">"What's the difference between Amazon Aurora and RDS?"</span>

<span class="hljs-comment"># Send the request to Claude 3.5 Sonnet using the Messages API</span>
response = anthropic.messages.create(
    model=<span class="hljs-string">"claude-3-sonnet-20240229"</span>,
    max_tokens=<span class="hljs-number">300</span>,
    temperature=<span class="hljs-number">0.7</span>,
    messages=[
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>, <span class="hljs-string">"content"</span>: message}
    ]
)

<span class="hljs-comment"># Print the response</span>
print(response.content[<span class="hljs-number">0</span>].text)
</code></pre>
<h3 id="heading-converted-to-run-using-amazon-bedrock">Converted to run using Amazon Bedrock</h3>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> botocore.exceptions <span class="hljs-keyword">import</span> ClientError

<span class="hljs-comment"># Set up the Amazon Bedrock client</span>
bedrock_runtime = boto3.client(
    service_name=<span class="hljs-string">'bedrock-runtime'</span>,
    region_name=<span class="hljs-string">'us-east-1'</span>,  <span class="hljs-comment"># Replace with your AWS region</span>
    aws_access_key_id=os.environ.get(<span class="hljs-string">"AWS_ACCESS_KEY_ID"</span>),
    aws_secret_access_key=os.environ.get(<span class="hljs-string">"AWS_SECRET_ACCESS_KEY"</span>)
)

<span class="hljs-comment"># Define the message</span>
message = <span class="hljs-string">"What's the difference between Amazon Aurora and RDS?"</span>

<span class="hljs-comment"># Prepare the request body</span>
request_body = json.dumps({
    <span class="hljs-string">"anthropic_version"</span>: <span class="hljs-string">"bedrock-2023-05-31"</span>,
    <span class="hljs-string">"max_tokens"</span>: <span class="hljs-number">300</span>,
    <span class="hljs-string">"temperature"</span>: <span class="hljs-number">0.7</span>,
    <span class="hljs-string">"messages"</span>: [
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>, <span class="hljs-string">"content"</span>: message}
    ]
})

<span class="hljs-comment"># Specify the Claude 3.5 Sonnet model ID</span>
model_id = <span class="hljs-string">'anthropic.claude-3-5-sonnet-20240620-v1:0'</span>

<span class="hljs-keyword">try</span>:
    print(<span class="hljs-string">f"Invoking model: <span class="hljs-subst">{model_id}</span>"</span>)
    response = bedrock_runtime.invoke_model(
        modelId=model_id,
        body=request_body
    )

    <span class="hljs-comment"># Parse and print the response</span>
    response_body = json.loads(response[<span class="hljs-string">'body'</span>].read())
    print(<span class="hljs-string">"Response:"</span>, response_body[<span class="hljs-string">'content'</span>][<span class="hljs-number">0</span>][<span class="hljs-string">'text'</span>])
<span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
    error_code = e.response[<span class="hljs-string">'Error'</span>][<span class="hljs-string">'Code'</span>]
    error_message = e.response[<span class="hljs-string">'Error'</span>][<span class="hljs-string">'Message'</span>]
    print(<span class="hljs-string">f"Error: <span class="hljs-subst">{error_code}</span> - <span class="hljs-subst">{error_message}</span>"</span>)
<span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
    print(<span class="hljs-string">f"Unexpected error: <span class="hljs-subst">{str(e)}</span>"</span>)

<span class="hljs-comment"># Print debugging information</span>
print(<span class="hljs-string">"\nDebugging Information:"</span>)
print(<span class="hljs-string">f"AWS Region: <span class="hljs-subst">{bedrock_runtime.meta.region_name}</span>"</span>)
print(<span class="hljs-string">f"Model used: <span class="hljs-subst">{model_id}</span>"</span>)
print(<span class="hljs-string">"Please ensure you have the correct permissions and that this model is available in your region."</span>)
</code></pre>
<h2 id="heading-alternative-leveraging-claude-for-code-conversion">Alternative: Leveraging Claude for Code Conversion</h2>
<p>While manually converting your code from the Anthropic API to Amazon Bedrock is a valuable exercise in understanding the differences between the two services, there's an alternative approach that can save time and potentially catch nuances you might miss: using Claude 3.5 Sonnet itself to assist with the conversion process. I was able to make this work, but I did have to do a bit of 'back and forth' with Claude 3.5 Sonnet to get working code. Even with the minor challenges, it still saved me time!</p>
<p>Here's how you can leverage Claude's capabilities:</p>
<ol>
<li><p><strong>Prepare Your Existing Code</strong>: Gather the Anthropic API code you want to convert. Ensure it's well-commented and includes any relevant context.</p>
</li>
<li><p><strong>Craft a Clear Request</strong>: Ask Claude to convert your code, providing specific instructions. For example: "Please convert the following Python code that uses the Anthropic API with Claude 3.5 Sonnet to use Amazon Bedrock instead. Maintain the same functionality and include error handling. Here's the code: [paste your code here]"</p>
</li>
<li><p><strong>Review and Refine</strong>: Examine the converted code provided by Claude. It may catch subtleties in the API differences that aren't immediately obvious. If needed, ask for clarifications or modifications.</p>
</li>
<li><p><strong>Iterate if Necessary</strong>: If the converted code isn't quite right, provide feedback to Claude and ask for adjustments. For instance: "The conversion looks good, but could you add more robust error handling for AWS-specific exceptions?"</p>
</li>
<li><p><strong>Test Thoroughly</strong>: Once satisfied with the conversion, test the new Amazon Bedrock code thoroughly to ensure it maintains the functionality of your original implementation.</p>
</li>
</ol>
<p>This approach can be particularly beneficial when:</p>
<ul>
<li><p>You're converting a large amount of code</p>
</li>
<li><p>You're unfamiliar with the intricacies of the Amazon Bedrock API</p>
</li>
<li><p>You want to ensure you're following best practices for Amazon Bedrock integration</p>
</li>
</ul>
<p>If there is one thing I've learned from using Claude 3.5 and other LLMs to write code -- <strong>Always</strong> review and test the converted code in your own environment.</p>
<p><strong>UPDATE on August 19th, 2024</strong> - A reader pointed out to me that the Anthropic API offers a path to Bedrock as well! This might be a simpler path depending on your needs. Check it out - <a target="_blank" href="https://docs.anthropic.com/en/api/claude-on-amazon-bedrock">https://docs.anthropic.com/en/api/claude-on-amazon-bedrock</a></p>
]]></content:encoded></item><item><title><![CDATA[Fun with Anthropic Claude and image file geo-location]]></title><description><![CDATA[I've been a photographer for decades, and as a result, I have over 100,000 image files in various formats, many of which include location information. I thought it would be fun to experiment with this location data. The task turned out to be a bit to...]]></description><link>https://gregwilson.tech/fun-with-anthropic-claude-and-image-file-locations</link><guid isPermaLink="true">https://gregwilson.tech/fun-with-anthropic-claude-and-image-file-locations</guid><category><![CDATA[claude.ai]]></category><category><![CDATA[Claude 3.5]]></category><category><![CDATA[Photography]]></category><category><![CDATA[geolocation]]></category><category><![CDATA[exiftool]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Fri, 16 Aug 2024 06:16:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723788924600/700365b3-09c1-4135-ae12-02676c8e3b46.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've been a photographer for decades, and as a result, I have over 100,000 image files in various formats, many of which include location information. I thought it would be fun to experiment with this location data. The task turned out to be a bit tougher than I expected, but it was a great chance to see how well <a target="_blank" href="https://claude.ai/">Anthropic Claude</a> could help debug. I ended up with 11 versions of the code as I fixed errors and expanded the scope. To keep this post brief, I'll only share the final code and skip the interim versions. However, I will also include all of my exchanges with Claude so you can see how this process works in a real-world scenario. 
I never touched the code that Claude generated; otherwise, I would have run this in an IDE with the appropriate Claude integrations (see <a target="_blank" href="https://claude3.us/claude-3-for-coding/">this article</a> for some examples).</p>
<p>I started simple to make sure I could even extract location data from my test images and asked Claude (3.5 Sonnet) to write some code for me:</p>
<p>My Prompt:</p>
<blockquote>
<p>Write a Python script that takes a list of image files as parameters. For each image file, list the filename, date created, and location information. Image formats will include jpg, nef, and heic (Apple iphone images).</p>
</blockquote>
<p>As requested, Claude created some Python code for me and asked me to install the Pillow and piexif libraries:</p>
<p><code>pip install Pillow piexif</code></p>
<p>I tried the generated code, got an <strong>error</strong>, and told Claude about it:</p>
<blockquote>
<p>I get the following error when testing with a jpg:<br />Traceback (most recent call last):<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 87, in &lt;module&gt;<br />main(sys.argv[1:])<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 74, in main<br />info = process_image(image_file)<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 63, in process_image<br />location = get_gps_info(exif_data)<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 44, in get_gps_info<br />return f"{lat:.6f}, {lon:.6f}"<br />TypeError: unsupported format string passed to Fraction.__format__</p>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>I apologize for the oversight. The error you're encountering is due to the GPS coordinates being stored as Fraction objects in some EXIF data, which can't be formatted directly with the f-string. Let's modify the get_gps_info function to handle this case.</code></p>
<p><code>These changes should resolve the error you were seeing. The script now handles GPS coordinates stored as tuples (degrees, minutes, seconds) or as Fraction objects, which are common formats for GPS data in EXIF.</code></p>
<p><code>This updated version should work with JPG files that store GPS coordinates in various formats. It still maintains compatibility with NEF and HEIC files as well.</code></p>
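<p>The kind of fix Claude describes -- coercing Fraction objects and rational tuples to floats before formatting -- can be sketched like this (my own illustration, not the code from the session):</p>

```python
from fractions import Fraction

def dms_to_decimal(dms, ref):
    """Convert EXIF GPS (degrees, minutes, seconds) to a signed decimal.
    Each component may arrive as an int, float, Fraction, or a
    (numerator, denominator) tuple, all of which show up in real EXIF data."""
    def to_float(value):
        if isinstance(value, tuple):      # rational stored as (num, den)
            return value[0] / value[1]
        return float(value)               # int, float, and Fraction all work
    degrees, minutes, seconds = (to_float(v) for v in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal
```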
<p>Everything worked with jpg files, but my heic files didn't, so I told Claude about it:</p>
<blockquote>
<p>Date created and location are unknown with all heic files</p>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>I apologize for the oversight with HEIC files. You're right that the current implementation isn't correctly extracting the date and location information from HEIC files. Let's modify the script to properly handle HEIC metadata. We'll use the pyheif library in addition to piexif to better process HEIC files.</code></p>
<p>Claude asked me to install pyheif - <strong>pip install pyheif</strong> - but I got errors trying to install it, so I told Claude about it:</p>
<blockquote>
<p>pyheif won't install</p>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>I apologize for the difficulty you're experiencing with installing pyheif. You're right that it can sometimes be challenging to install due to its dependencies. Let's try an alternative approach using pillow-heif, which is generally easier to install and use. Here's an updated version of the script that uses pillow-heif instead of pyheif:</code></p>
<p>I installed pillow-heif as instructed and tried the newly generated code, but got another <strong>error</strong>, so I told Claude about it:</p>
<blockquote>
<p>I'm getting the following error:<br />Traceback (most recent call last):<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 82, in &lt;module&gt;<br />main(sys.argv[1:])<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 69, in main<br />info = process_image(image_file)<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 58, in process_image<br />location = get_gps_info(exif_data)<br />File "/Users/gregwilson/Downloads/imageplay/<a target="_blank" href="http://image-metadata-script.py">image-metadata-script.py</a>", line 38, in get_gps_info<br />lat = gps_info.get(2)<br />AttributeError: 'int' object has no attribute 'get'</p>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>I apologize for the oversight. The error you're encountering suggests that the GPS information is not being stored as expected in the EXIF data. Let's modify the get_gps_info function to handle different possible structures of GPS data.</code></p>
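<p>For reference, the heart of that fix is a few lines of arithmetic: EXIF stores GPS coordinates as degrees/minutes/seconds plus a hemisphere reference, and they have to be collapsed into signed decimal degrees. Here's a standalone sketch of just that conversion (it mirrors the <code>convert_to_degrees</code> helper in the final script):</p>

```python
def dms_to_decimal(dms, ref):
    """Convert an EXIF (degrees, minutes, seconds) tuple to signed decimal degrees."""
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemisphere references flip the sign
    return -value if ref in ('S', 'W') else value

# 48 deg 51' 43.83" N works out to the 48.862175 latitude of the Paris photo
print(dms_to_decimal((48, 51, 43.83), 'N'))
```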
<p>This worked for jpg files, so now we're getting somewhere! However, for heic files, it didn't work, so I told Claude about it:</p>
<blockquote>
<p>Location is unknown in all heic files</p>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>I apologize for the continued difficulty with HEIC files. It seems that the GPS data in HEIC files might be stored differently or not easily accessible through the current method. Let's try a different approach using the exif library, which is specifically designed to handle EXIF data and should work better with HEIC files.</code></p>
<p>I tried the new code, got a new error, and told Claude about it:</p>
<blockquote>
<p>Getting following error on heic files:</p>
<pre><code class="lang-plaintext">Error reading EXIF data for IMG_1837.heic:
+--------+------------+-------+-------+------------------------+
| Offset | Access     | Value | Bytes | Format                 |
+--------+------------+-------+-------+------------------------+
|        |            |       |       | TiffHeader (Structure) |
| 0      | byte_order | 11565 | 2d 2d | tiff_byte_order        |
+--------+------------+-------+-------+------------------------+
</code></pre>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>I apologize for the continued issues. It seems that the exif library is having trouble reading the HEIC files directly. Let's try a different approach using pillow_heif to open the HEIC files and then extract the EXIF data.</code></p>
<p>I downloaded the new code, and <strong>it worked</strong>! Below is the output:</p>
<pre><code class="lang-plaintext">python3 image-metadata-script.py *heic *jpg      
Filename: IMG_1837.heic
Date Created: 2023-12-26 21:27:59
Location: 37.659542, -122.294072
----------------------------------------
Filename: IMG_3724.heic
Date Created: 2023-08-08 19:59:54
Location: 37.659483, -122.294006
----------------------------------------
Filename: IMG_3725.heic
Date Created: 2023-08-08 20:01:46
Location: 37.659525, -122.293936
----------------------------------------
Filename: DSC_7148.jpg
Date Created: 2015-08-27 09:36:46
Location: 48.862175, 2.323205
----------------------------------------
Filename: PANO0001 Panorama-106.jpg
Date Created: 2023-09-07 18:17:21
Location: 37.659570, -122.293875
----------------------------------------
Filename: PANO0001 Panorama.jpg
Date Created: 2023-09-07 18:15:53
Location: 37.6759570, -122.293872
----------------------------------------
</code></pre>
<p>I obviously had to wrestle Claude a bit to get this working, but it's impressive nonetheless.</p>
<p>I decided to take it to the next level and have Claude add some code to convert the lat/long to city, state, country. I asked Claude:</p>
<blockquote>
<p>That works. Now add the city, state, country of each image location</p>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>Great to hear that the script is now working for your HEIC files! Let's enhance the script to add city, state, and country information for each image location. We'll use the geopy library to perform reverse geocoding based on the GPS coordinates.</code></p>
<p>I installed the required libraries, downloaded the new code, tried it, got an <strong>error</strong>, and told Claude about it:</p>
<blockquote>
<p>Getting error:</p>
<pre><code class="lang-plaintext">Error getting location details: HTTPSConnectionPool(host='nominatim.openstreetmap.org', port=443): Max retries exceeded with url: /reverse?lat=37.759541666666664&amp;lon=-122.39407222222223&amp;format=json&amp;addressdetails=1 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
</code></pre>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>I apologize for the SSL certificate verification error you're encountering. This issue is likely related to your Python installation not having the proper SSL certificates. Let's modify our script to work around this issue while still maintaining security. We'll add an option to disable SSL verification, but we'll make sure to warn the user about the potential security implications.</code></p>
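<p>For the curious, the workaround boils down to the standard library's <code>ssl</code> module: build a context that verifies certificates by default and only relaxes the checks when explicitly asked. This is a minimal sketch of the idea (the generated <code>create_ssl_context</code> in the final script does essentially the same thing):</p>

```python
import ssl

def make_ssl_context(verify=True):
    """Return a default (verifying) SSL context, or a non-verifying one on request."""
    ctx = ssl.create_default_context()
    if not verify:
        # WARNING: disables certificate validation -- fine for a local experiment,
        # never for production use
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```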
<p>Claude also provided instructions for how to get my SSL cert installed and working, but I didn't want to do that for this little exercise, so I downloaded the new code, tried it, and it worked! Below is the new output:</p>
<pre><code class="lang-plaintext">----------------------------------------
Filename: DSC_7148.jpg
Date Created: 2015-08-27 09:36:46
Location: 48.862175, 2.323205
Location Details: Paris, Île-de-France, France
----------------------------------------
</code></pre>
<p>I decided to try one more thing... I asked Claude:</p>
<blockquote>
<p>Works perfectly. Now determine the distance between all images and determine the two images with the most distance between them.</p>
</blockquote>
<p><strong>Claude responded:</strong></p>
<p><code>Great to hear it's working perfectly! Let's enhance the script to calculate the distances between all images and find the two images that are farthest apart. We'll use the geopy.distance module to calculate the distances between coordinates.</code></p>
<p>I downloaded the new code and it worked perfectly. I gave it a handful of images and it output:</p>
<pre><code class="lang-plaintext">The two images farthest apart are:
1. DSC_7148.jpg (Paris, Île-de-France, France)
2. IMG_3724.heic (San Francisco, California, United States)
Distance between them: 8974.51 km
</code></pre>
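<p>As a dependency-free sanity check on that number, the spherical haversine formula lands within a fraction of a percent of geopy's ellipsoidal <code>geodesic</code> (which models the Earth as an ellipsoid rather than a sphere). This little sketch of mine plugs in the Paris and Bay Area coordinates from the output above:</p>

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers, assuming a spherical Earth."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    half_dlat = math.radians(lat2 - lat1) / 2
    half_dlon = math.radians(lon2 - lon1) / 2
    a = math.sin(half_dlat) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(half_dlon) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Paris (DSC_7148.jpg) vs. the Bay Area (IMG_3724.heic)
print(round(haversine_km(48.862175, 2.323205, 37.659483, -122.294006)))
```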
<h2 id="heading-the-final-code">The final code:</h2>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> sys
<span class="hljs-keyword">from</span> datetime <span class="hljs-keyword">import</span> datetime
<span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-keyword">from</span> pillow_heif <span class="hljs-keyword">import</span> register_heif_opener, HeifImagePlugin
<span class="hljs-keyword">from</span> exif <span class="hljs-keyword">import</span> Image <span class="hljs-keyword">as</span> ExifImage
<span class="hljs-keyword">import</span> io
<span class="hljs-keyword">from</span> geopy.geocoders <span class="hljs-keyword">import</span> Nominatim
<span class="hljs-keyword">from</span> geopy.distance <span class="hljs-keyword">import</span> geodesic
<span class="hljs-keyword">from</span> geopy.exc <span class="hljs-keyword">import</span> GeocoderTimedOut, GeocoderUnavailable
<span class="hljs-keyword">import</span> time
<span class="hljs-keyword">import</span> ssl
<span class="hljs-keyword">import</span> certifi
<span class="hljs-keyword">import</span> argparse
<span class="hljs-keyword">from</span> itertools <span class="hljs-keyword">import</span> combinations

<span class="hljs-comment"># Register HEIF opener with Pillow</span>
register_heif_opener()

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_ssl_context</span>(<span class="hljs-params">verify_ssl</span>):</span>
    <span class="hljs-keyword">if</span> verify_ssl:
        <span class="hljs-keyword">return</span> ssl.create_default_context(cafile=certifi.where())
    <span class="hljs-keyword">else</span>:
        ctx = ssl.create_default_context()
        ctx.check_hostname = <span class="hljs-literal">False</span>
        ctx.verify_mode = ssl.CERT_NONE
        <span class="hljs-keyword">return</span> ctx

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">initialize_geolocator</span>(<span class="hljs-params">verify_ssl</span>):</span>
    ssl_context = create_ssl_context(verify_ssl)
    <span class="hljs-keyword">return</span> Nominatim(user_agent=<span class="hljs-string">"image_metadata_script"</span>, scheme=<span class="hljs-string">'https'</span>, ssl_context=ssl_context)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_exif_data</span>(<span class="hljs-params">image_path</span>):</span>
    <span class="hljs-keyword">try</span>:
        <span class="hljs-keyword">if</span> image_path.lower().endswith(<span class="hljs-string">'.heic'</span>):
            <span class="hljs-keyword">with</span> Image.open(image_path) <span class="hljs-keyword">as</span> img:
                exif_data = img.getexif()
                exif_bytes = io.BytesIO()
                img.save(exif_bytes, format=<span class="hljs-string">'JPEG'</span>, exif=exif_data)
                exif_bytes.seek(<span class="hljs-number">0</span>)
                <span class="hljs-keyword">return</span> ExifImage(exif_bytes)
        <span class="hljs-keyword">else</span>:
            <span class="hljs-keyword">with</span> open(image_path, <span class="hljs-string">'rb'</span>) <span class="hljs-keyword">as</span> image_file:
                <span class="hljs-keyword">return</span> ExifImage(image_file)
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        print(<span class="hljs-string">f"Error reading EXIF data for <span class="hljs-subst">{image_path}</span>: <span class="hljs-subst">{str(e)}</span>"</span>)
    <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_creation_date</span>(<span class="hljs-params">exif_data</span>):</span>
    <span class="hljs-keyword">try</span>:
        <span class="hljs-keyword">if</span> hasattr(exif_data, <span class="hljs-string">'datetime_original'</span>):
            <span class="hljs-keyword">return</span> datetime.strptime(str(exif_data.datetime_original), <span class="hljs-string">'%Y:%m:%d %H:%M:%S'</span>)
        <span class="hljs-keyword">elif</span> hasattr(exif_data, <span class="hljs-string">'datetime'</span>):
            <span class="hljs-keyword">return</span> datetime.strptime(str(exif_data.datetime), <span class="hljs-string">'%Y:%m:%d %H:%M:%S'</span>)
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        print(<span class="hljs-string">f"Error parsing creation date: <span class="hljs-subst">{str(e)}</span>"</span>)
    <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">convert_to_degrees</span>(<span class="hljs-params">dms, ref</span>):</span>
    degrees = dms[<span class="hljs-number">0</span>]
    minutes = dms[<span class="hljs-number">1</span>] / <span class="hljs-number">60.0</span>
    seconds = dms[<span class="hljs-number">2</span>] / <span class="hljs-number">3600.0</span>

    <span class="hljs-keyword">if</span> ref <span class="hljs-keyword">in</span> [<span class="hljs-string">'S'</span>, <span class="hljs-string">'W'</span>]:
        <span class="hljs-keyword">return</span> -(degrees + minutes + seconds)
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> degrees + minutes + seconds

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_gps_info</span>(<span class="hljs-params">exif_data</span>):</span>
    <span class="hljs-keyword">try</span>:
        <span class="hljs-keyword">if</span> hasattr(exif_data, <span class="hljs-string">'gps_latitude'</span>) <span class="hljs-keyword">and</span> hasattr(exif_data, <span class="hljs-string">'gps_longitude'</span>):
            lat = convert_to_degrees(exif_data.gps_latitude, exif_data.gps_latitude_ref)
            lon = convert_to_degrees(exif_data.gps_longitude, exif_data.gps_longitude_ref)
            <span class="hljs-keyword">return</span> lat, lon
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        print(<span class="hljs-string">f"Error processing GPS info: <span class="hljs-subst">{str(e)}</span>"</span>)
    <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_location_details</span>(<span class="hljs-params">geolocator, lat, lon</span>):</span>
    max_retries = <span class="hljs-number">3</span>
    <span class="hljs-keyword">for</span> attempt <span class="hljs-keyword">in</span> range(max_retries):
        <span class="hljs-keyword">try</span>:
            location = geolocator.reverse(<span class="hljs-string">f"<span class="hljs-subst">{lat}</span>, <span class="hljs-subst">{lon}</span>"</span>)
            <span class="hljs-keyword">if</span> location:
                address = location.raw[<span class="hljs-string">'address'</span>]
                city = address.get(<span class="hljs-string">'city'</span>) <span class="hljs-keyword">or</span> address.get(<span class="hljs-string">'town'</span>) <span class="hljs-keyword">or</span> address.get(<span class="hljs-string">'village'</span>) <span class="hljs-keyword">or</span> <span class="hljs-string">'Unknown'</span>
                state = address.get(<span class="hljs-string">'state'</span>) <span class="hljs-keyword">or</span> <span class="hljs-string">'Unknown'</span>
                country = address.get(<span class="hljs-string">'country'</span>) <span class="hljs-keyword">or</span> <span class="hljs-string">'Unknown'</span>
                <span class="hljs-keyword">return</span> <span class="hljs-string">f"<span class="hljs-subst">{city}</span>, <span class="hljs-subst">{state}</span>, <span class="hljs-subst">{country}</span>"</span>
            <span class="hljs-keyword">return</span> <span class="hljs-string">"Unknown location"</span>
        <span class="hljs-keyword">except</span> (GeocoderTimedOut, GeocoderUnavailable) <span class="hljs-keyword">as</span> e:
            <span class="hljs-keyword">if</span> attempt &lt; max_retries - <span class="hljs-number">1</span>:
                time.sleep(<span class="hljs-number">1</span>)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">f"Error getting location details: <span class="hljs-subst">{str(e)}</span>"</span>)
                <span class="hljs-keyword">return</span> <span class="hljs-string">"Unknown location"</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">process_image</span>(<span class="hljs-params">image_path, geolocator</span>):</span>
    filename = os.path.basename(image_path)
    exif_data = get_exif_data(image_path)

    creation_date = get_creation_date(exif_data) <span class="hljs-keyword">if</span> exif_data <span class="hljs-keyword">else</span> <span class="hljs-literal">None</span>
    gps_info = get_gps_info(exif_data) <span class="hljs-keyword">if</span> exif_data <span class="hljs-keyword">else</span> <span class="hljs-literal">None</span>

    location = <span class="hljs-string">"Unknown"</span>
    location_details = <span class="hljs-string">"Unknown location"</span>
    <span class="hljs-keyword">if</span> gps_info:
        lat, lon = gps_info
        location = <span class="hljs-string">f"<span class="hljs-subst">{lat:<span class="hljs-number">.6</span>f}</span>, <span class="hljs-subst">{lon:<span class="hljs-number">.6</span>f}</span>"</span>
        location_details = get_location_details(geolocator, lat, lon)

    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">"filename"</span>: filename,
        <span class="hljs-string">"creation_date"</span>: creation_date.strftime(<span class="hljs-string">'%Y-%m-%d %H:%M:%S'</span>) <span class="hljs-keyword">if</span> creation_date <span class="hljs-keyword">else</span> <span class="hljs-string">"Unknown"</span>,
        <span class="hljs-string">"location"</span>: location,
        <span class="hljs-string">"location_details"</span>: location_details,
        <span class="hljs-string">"gps_coords"</span>: gps_info
    }

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">calculate_distances</span>(<span class="hljs-params">image_infos</span>):</span>
    distances = []
    <span class="hljs-keyword">for</span> (img1, info1), (img2, info2) <span class="hljs-keyword">in</span> combinations(image_infos.items(), <span class="hljs-number">2</span>):
        <span class="hljs-keyword">if</span> info1[<span class="hljs-string">'gps_coords'</span>] <span class="hljs-keyword">and</span> info2[<span class="hljs-string">'gps_coords'</span>]:
            distance = geodesic(info1[<span class="hljs-string">'gps_coords'</span>], info2[<span class="hljs-string">'gps_coords'</span>]).miles
            distances.append((img1, img2, distance))
    <span class="hljs-keyword">return</span> distances

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">find_farthest_images</span>(<span class="hljs-params">distances</span>):</span>
    <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> distances:
        <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>, <span class="hljs-literal">None</span>, <span class="hljs-number">0</span>
    <span class="hljs-keyword">return</span> max(distances, key=<span class="hljs-keyword">lambda</span> x: x[<span class="hljs-number">2</span>])

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>(<span class="hljs-params">image_files, verify_ssl</span>):</span>
    geolocator = initialize_geolocator(verify_ssl)
    image_infos = {}

    <span class="hljs-keyword">for</span> image_file <span class="hljs-keyword">in</span> image_files:
        <span class="hljs-keyword">if</span> os.path.exists(image_file):
            info = process_image(image_file, geolocator)
            image_infos[image_file] = info
            print(<span class="hljs-string">f"Filename: <span class="hljs-subst">{info[<span class="hljs-string">'filename'</span>]}</span>"</span>)
            print(<span class="hljs-string">f"Date Created: <span class="hljs-subst">{info[<span class="hljs-string">'creation_date'</span>]}</span>"</span>)
            print(<span class="hljs-string">f"Location: <span class="hljs-subst">{info[<span class="hljs-string">'location'</span>]}</span>"</span>)
            print(<span class="hljs-string">f"Location Details: <span class="hljs-subst">{info[<span class="hljs-string">'location_details'</span>]}</span>"</span>)
            print(<span class="hljs-string">"-"</span> * <span class="hljs-number">40</span>)
        <span class="hljs-keyword">else</span>:
            print(<span class="hljs-string">f"File not found: <span class="hljs-subst">{image_file}</span>"</span>)
            print(<span class="hljs-string">"-"</span> * <span class="hljs-number">40</span>)

    distances = calculate_distances(image_infos)
    farthest_img1, farthest_img2, max_distance = find_farthest_images(distances)

    <span class="hljs-keyword">if</span> farthest_img1 <span class="hljs-keyword">and</span> farthest_img2:
        print(<span class="hljs-string">f"\nThe two images farthest apart are:"</span>)
        print(<span class="hljs-string">f"1. <span class="hljs-subst">{image_infos[farthest_img1][<span class="hljs-string">'filename'</span>]}</span> (<span class="hljs-subst">{image_infos[farthest_img1][<span class="hljs-string">'location_details'</span>]}</span>)"</span>)
        print(<span class="hljs-string">f"2. <span class="hljs-subst">{image_infos[farthest_img2][<span class="hljs-string">'filename'</span>]}</span> (<span class="hljs-subst">{image_infos[farthest_img2][<span class="hljs-string">'location_details'</span>]}</span>)"</span>)
        print(<span class="hljs-string">f"Distance between them: <span class="hljs-subst">{max_distance:<span class="hljs-number">.2</span>f}</span> miles"</span>)
    <span class="hljs-keyword">else</span>:
        print(<span class="hljs-string">"\nUnable to calculate distances. Ensure at least two images have valid GPS coordinates."</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    parser = argparse.ArgumentParser(description=<span class="hljs-string">"Process image metadata, location information, and calculate distances in miles."</span>)
    parser.add_argument(<span class="hljs-string">"image_files"</span>, nargs=<span class="hljs-string">"+"</span>, help=<span class="hljs-string">"Image files to process"</span>)
    parser.add_argument(<span class="hljs-string">"--no-verify-ssl"</span>, action=<span class="hljs-string">"store_true"</span>, help=<span class="hljs-string">"Disable SSL certificate verification (use with caution)"</span>)
    args = parser.parse_args()

    <span class="hljs-keyword">if</span> args.no_verify_ssl:
        print(<span class="hljs-string">"WARNING: SSL certificate verification is disabled. This is not recommended for production use."</span>)
        print(<span class="hljs-string">"To fix SSL issues properly, try updating your SSL certificates:"</span>)
        print(<span class="hljs-string">"pip install --upgrade certifi"</span>)
        print(<span class="hljs-string">"If issues persist, consult your system administrator or Python installation documentation."</span>)
        print()

    main(args.image_files, <span class="hljs-keyword">not</span> args.no_verify_ssl)
</code></pre>
<p>Going through this exercise gave me a much better sense of the kind of coding Claude and other LLMs are capable of, and of how the debugging loop can work.</p>
<p>For my next project, I might take all of the images from a specific year, extract the location, and create a KML file so I can <a target="_blank" href="https://support.google.com/earth/answer/7365595?hl=en&amp;co=GENIE.Platform%3DDesktop">visualize all of the images in Google Earth</a>. I have no doubt that I could get this done quickly.</p>
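<p>That KML file wouldn't need much, either. Assuming I already have (filename, latitude, longitude) tuples from the script above, a sketch using nothing but the standard library might look something like this (note that KML wants coordinates in longitude,latitude order):</p>

```python
import xml.etree.ElementTree as ET

def to_kml(placemarks):
    """Render (name, lat, lon) tuples as a minimal KML document string."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lat, lon in placemarks:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        point = ET.SubElement(pm, "Point")
        # KML coordinate order is longitude,latitude -- the reverse of the EXIF output
        ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

print(to_kml([("DSC_7148.jpg", 48.862175, 2.323205)]))
```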
]]></content:encoded></item><item><title><![CDATA[How I Supercharged My Synology DS723+ for Adobe Lightroom]]></title><description><![CDATA[As a frequent photographer, my Synology DS723+ NAS is the heart of my digital workflow. Recently, I decided to upgrade it with NVMe SSDs and a 10GbE network card. These two modifications have dramatically improved my workflow, especially when working...]]></description><link>https://gregwilson.tech/how-i-supercharged-my-synology-ds723-for-adobe-lightroom</link><guid isPermaLink="true">https://gregwilson.tech/how-i-supercharged-my-synology-ds723-for-adobe-lightroom</guid><category><![CDATA[Synology]]></category><category><![CDATA[Synology NAS]]></category><category><![CDATA[adobe lightroom]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Thu, 15 Aug 2024 07:38:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723707341730/e49a756f-bc51-45d2-ac06-df1eb0ed0145.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a frequent photographer, my Synology DS723+ NAS is the heart of my digital workflow. Recently, I decided to upgrade it with NVMe SSDs and a 10GbE network card. These two modifications have dramatically improved my workflow, especially when working with Adobe Lightroom. Here's my experience with these upgrades and how they've transformed my photo editing process.</p>
<h2 id="heading-my-photography-workflow">My Photography Workflow</h2>
<p>Before diving into the upgrades, let me explain my setup. I use my DS723+ primarily as a centralized storage solution for my extensive photo library. Adobe Lightroom is my go-to software for accessing and editing these photos. Working with high-resolution RAW files demands maximum performance from both my storage and network.</p>
<p>Two upgrades I installed:</p>
<ul>
<li><p><a target="_blank" href="https://amzn.to/3SLy9oz">Synology M.2 2280 NVMe SSD SNV3410 400GB (SNV3410-400G)</a></p>
</li>
<li><p><a target="_blank" href="https://amzn.to/3yEwdY5">Synology Network Upgrade Module adds 1x 10GbE RJ-45 (E10G22-T1-Mini)</a></p>
</li>
</ul>
<p>(Both of these were available on Amazon)</p>
<h2 id="heading-upgrade-1-adding-nvme-ssds">Upgrade 1: Adding NVMe SSDs</h2>
<h3 id="heading-the-installation-process">The Installation Process</h3>
<p>Installing the NVMe SSDs was straightforward:</p>
<ol>
<li><p>I powered down the DS723+ and unplugged it.</p>
</li>
<li><p>Removed the cover (following the user manual).</p>
</li>
<li><p>Located the two M.2 NVMe slots on the bottom of the motherboard.</p>
</li>
<li><p>Inserted the SSDs and secured them.</p>
</li>
<li><p>Replaced the cover and powered on the NAS.</p>
</li>
<li><p>Used DSM (DiskStation Manager) to initialize and configure the new SSDs.</p>
</li>
</ol>
<h2 id="heading-upgrade-2-adding-a-10gbe-network-card">Upgrade 2: Adding a 10GbE Network Card</h2>
<h3 id="heading-the-installation-process-1">The Installation Process</h3>
<p>Adding the 10GbE network card was also straightforward:</p>
<ol>
<li><p>Powered down and unplugged the NAS.</p>
</li>
<li><p>Removed the cover.</p>
</li>
<li><p>Inserted the 10GbE card into the PCIe slot and secured it.</p>
</li>
<li><p>Replaced the cover and powered on the NAS.</p>
</li>
<li><p>Configured the new network interface in DSM and connected it to the 10GbE port on my router (<a target="_blank" href="https://store.ui.com/us/en/collections/unifi-dream-machine/products/udm-se">Ubiquiti Dream Machine SE</a>).</p>
</li>
</ol>
<h2 id="heading-the-overall-impact-on-my-workflow">The Overall Impact on My Workflow</h2>
<p>These upgrades have truly transformed my Lightroom workflow. Tasks that used to take minutes now complete in seconds. Importing photos, generating previews, scrolling through my library, and exporting edited images - all these processes are now significantly faster.</p>
<p>More importantly, these performance improvements have made my editing process more enjoyable and creative. I'm no longer waiting for everything to catch up with my workflow. Instead, I can focus on my images.</p>
<p>The immediate caching of new uploads is particularly game-changing. After a long shoot, I can start working on my photos almost instantly after uploading them.</p>
<p>Other Synology NAS devices support these upgrades - see their website for details.</p>
]]></content:encoded></item><item><title><![CDATA[Playing with Claude 3.5 Sonnet for code generation and image alt-text descriptions]]></title><description><![CDATA[I had a little time to relax over the weekend, so I decided to try a small coding project with Anthropic Claude 3.5 Sonnet. I wanted to test the new 3.5 Sonnet model and also explore the artifacts feature. Claude 3.5 Sonnet is very good at describing...]]></description><link>https://gregwilson.tech/playing-with-claude-35-sonnet-for-code-generation-and-image-alt-text-descriptions</link><guid isPermaLink="true">https://gregwilson.tech/playing-with-claude-35-sonnet-for-code-generation-and-image-alt-text-descriptions</guid><category><![CDATA[claude.ai]]></category><category><![CDATA[AI Code Generator]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Thu, 15 Aug 2024 06:18:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723703167603/c432a51f-232c-4ac0-8402-3fb6f654e9ee.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had a little time to relax over the weekend, so I decided to try a small coding project with Anthropic Claude 3.5 Sonnet. I wanted to test the new 3.5 Sonnet model and also explore the <a target="_blank" href="https://support.anthropic.com/en/articles/9487310-what-are-artifacts-and-how-do-i-use-them">artifacts feature</a>. Claude 3.5 Sonnet is very good at describing images that contain technical diagrams and screenshots, so I decided to try writting some code to auto-generate alt-text for images in any HTML file. Here's the thing -- I've never written Python code. I'm more of a Java and Node kind of guy, so by choosing Python, I was really testing my ability to generate code from scratch using some simple prompting.</p>
<p>Below is my chat with Claude. Although I'm not an expert at this type of prompting yet, I was able to hack my way through it. After most prompts, Claude created new code for me that was replaced after the subsequent prompts, so I omitted the early Claude responses to make this easier to read. I started by testing whether I could get it to describe local images of varying formats (JPG, HEIC, PNG). Once this was successful, I expanded the scope to include parsing the images from HTML, injecting new ALT attributes, and more. This entire project took me only 20 minutes, which I found super exciting. Even if I had written it in a programming language I'm intimately familiar with, I could never have completed it this fast, because I'd have had to figure out how to handle PNGs, JPGs, HEIC, the Anthropic APIs, and more.</p>
<hr />
<p><strong>Greg</strong>: Are you familiar with the Claude API?</p>
<p><strong>Claude 3.5 Sonnet</strong>: Yes, I am familiar with the Claude API. It allows developers to integrate Anthropic's language models, including versions of me, into their applications and services. The API provides access to capabilities like natural language understanding, text generation, and task completion. However, I don't have detailed knowledge about its exact specifications, pricing, or implementation details. For the most up-to-date and accurate information about the Claude API, I'd recommend checking Anthropic's official documentation and resources.</p>
<p><strong>Greg</strong>: I have a list of image URLs. I need a 125 char description of each. Write python code to do this with Claude 3.5 Sonnet.<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: I will provide the list of image URLs in a file named images.csv<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: print each url as you read the file<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: Getting an error - 'url' found using 'type' does not match any of the expected tags: 'base64'"</p>
<p><strong>Claude 3.5 Sonnet</strong>: I apologize for the confusion. It seems the error is related to how we're passing the image URL to the Claude API. The API is expecting a base64-encoded image rather than a URL. Let's modify the script to handle this correctly. We'll need to download the image and convert it to base64 before sending it to the API. Here's an updated version of the script that should resolve this issue:</p>
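<p>That pattern -- download the bytes, then base64-encode them before putting them in the message payload -- is common for vision APIs that accept inline image data. Just the encoding step, as a standalone sketch:</p>

```python
import base64

def to_base64(data):
    """Base64-encode raw image bytes into the ASCII string an inline-image payload expects."""
    return base64.b64encode(data).decode("ascii")

# The PNG magic bytes produce the familiar "iVBORw" prefix seen in data: URIs
print(to_base64(b"\x89PNG"))  # iVBORw==
```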
<p><strong>Greg</strong>: This only handles jpg. I need it to also handle png<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: Output the results to a new csv file. The columns of the csv file should be (1) image url and (2) image description.<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: add gif support including animated gif<br />(Claude output omitted for brevity)</p>
<p>(I had to install PIL)</p>
<p><strong>Greg</strong>: Can you add webp support?<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: I want the option to pass a parameter to the program. The parameter will be a URL of a webpage. Read the web page and use all of the image URLs in the page instead of the input file currently used<br />(Claude output omitted for brevity)</p>
<p>(I had to install BeautifulSoup)</p>
<p><strong>Greg</strong>: Modify the program to output the same HTML that it reads but insert each of the image descriptions as the "alt" of the image tag.<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: write the output to stdout in addition to the file<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: Don't process img tags that already have the alt attribute<br />(Claude output omitted for brevity)</p>
<p><strong>Greg</strong>: Change the program to take a single parameter that is an HTML file. The output will continue to be stdout.</p>
<p>Resulting program (untouched by me):</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> anthropic
<span class="hljs-keyword">import</span> base64
<span class="hljs-keyword">import</span> argparse
<span class="hljs-keyword">import</span> sys
<span class="hljs-keyword">from</span> io <span class="hljs-keyword">import</span> BytesIO
<span class="hljs-keyword">from</span> urllib.parse <span class="hljs-keyword">import</span> urljoin
<span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-keyword">from</span> bs4 <span class="hljs-keyword">import</span> BeautifulSoup
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> requests

<span class="hljs-comment"># Initialize the Anthropic client</span>
client = anthropic.Anthropic(api_key=<span class="hljs-string">"your_api_key_here"</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">download_image</span>(<span class="hljs-params">url</span>):</span>
    response = requests.get(url)
    <span class="hljs-keyword">return</span> BytesIO(response.content)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">image_to_base64</span>(<span class="hljs-params">image</span>):</span>
    buffered = BytesIO()
    image.save(buffered, format=<span class="hljs-string">"PNG"</span>)
    <span class="hljs-keyword">return</span> base64.b64encode(buffered.getvalue()).decode(<span class="hljs-string">'utf-8'</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_image_type</span>(<span class="hljs-params">path</span>):</span>
    ext = os.path.splitext(path)[<span class="hljs-number">1</span>].lower()
    <span class="hljs-keyword">if</span> ext == <span class="hljs-string">'.png'</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'image/png'</span>
    <span class="hljs-keyword">elif</span> ext <span class="hljs-keyword">in</span> [<span class="hljs-string">'.jpg'</span>, <span class="hljs-string">'.jpeg'</span>]:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'image/jpeg'</span>
    <span class="hljs-keyword">elif</span> ext == <span class="hljs-string">'.gif'</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'image/gif'</span>
    <span class="hljs-keyword">elif</span> ext == <span class="hljs-string">'.webp'</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'image/webp'</span>
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'image/jpeg'</span>  <span class="hljs-comment"># Default to JPEG if unable to determine</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">process_gif</span>(<span class="hljs-params">gif_image</span>):</span>
    <span class="hljs-keyword">if</span> gif_image.is_animated:
        gif_image.seek(<span class="hljs-number">0</span>)
    <span class="hljs-keyword">return</span> gif_image.convert(<span class="hljs-string">'RGB'</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_image_description</span>(<span class="hljs-params">image_path</span>):</span>
    <span class="hljs-keyword">try</span>:
        media_type = get_image_type(image_path)

        <span class="hljs-keyword">with</span> Image.open(image_path) <span class="hljs-keyword">as</span> img:
            <span class="hljs-keyword">if</span> media_type == <span class="hljs-string">'image/gif'</span>:
                img = process_gif(img)
            <span class="hljs-keyword">elif</span> media_type == <span class="hljs-string">'image/webp'</span> <span class="hljs-keyword">and</span> hasattr(img, <span class="hljs-string">'is_animated'</span>) <span class="hljs-keyword">and</span> img.is_animated:
                img.seek(<span class="hljs-number">0</span>)

            <span class="hljs-keyword">if</span> img.mode != <span class="hljs-string">'RGB'</span>:
                img = img.convert(<span class="hljs-string">'RGB'</span>)

            base64_image = image_to_base64(img)

        message = client.messages.create(
            model=<span class="hljs-string">"claude-3-sonnet-20240229"</span>,
            max_tokens=<span class="hljs-number">125</span>,
            temperature=<span class="hljs-number">0.7</span>,
            system=<span class="hljs-string">"You are a helpful assistant that provides concise image descriptions."</span>,
            messages=[
                {
                    <span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>,
                    <span class="hljs-string">"content"</span>: [
                        {
                            <span class="hljs-string">"type"</span>: <span class="hljs-string">"image"</span>,
                            <span class="hljs-string">"source"</span>: {
                                <span class="hljs-string">"type"</span>: <span class="hljs-string">"base64"</span>,
                                <span class="hljs-string">"media_type"</span>: <span class="hljs-string">"image/png"</span>,
                                <span class="hljs-string">"data"</span>: base64_image
                            }
                        },
                        {
                            <span class="hljs-string">"type"</span>: <span class="hljs-string">"text"</span>,
                            <span class="hljs-string">"text"</span>: <span class="hljs-string">"Describe this image in exactly 125 characters. If it's an animated image, describe the first frame."</span>
                        }
                    ]
                }
            ]
        )

        <span class="hljs-keyword">return</span> message.content[<span class="hljs-number">0</span>].text
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        <span class="hljs-keyword">return</span> <span class="hljs-string">f"Error processing image: <span class="hljs-subst">{str(e)}</span>"</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">process_html</span>(<span class="hljs-params">html_file</span>):</span>
    <span class="hljs-keyword">try</span>:
        <span class="hljs-keyword">with</span> open(html_file, <span class="hljs-string">'r'</span>, encoding=<span class="hljs-string">'utf-8'</span>) <span class="hljs-keyword">as</span> file:
            soup = BeautifulSoup(file, <span class="hljs-string">'html.parser'</span>)

        base_dir = os.path.dirname(os.path.abspath(html_file))
        img_tags = soup.find_all(<span class="hljs-string">'img'</span>)

        <span class="hljs-keyword">for</span> img <span class="hljs-keyword">in</span> img_tags:
            <span class="hljs-keyword">if</span> img.get(<span class="hljs-string">'alt'</span>):
                print(<span class="hljs-string">f"Skipping image with existing alt text: <span class="hljs-subst">{img.get(<span class="hljs-string">'src'</span>)}</span>"</span>, file=sys.stderr)
                <span class="hljs-keyword">continue</span>

            img_src = img.get(<span class="hljs-string">'src'</span>)
            <span class="hljs-keyword">if</span> img_src:
                img_path = os.path.join(base_dir, img_src)
                <span class="hljs-keyword">if</span> os.path.exists(img_path):
                    print(<span class="hljs-string">f"Processing image: <span class="hljs-subst">{img_path}</span>"</span>, file=sys.stderr)
                    description = get_image_description(img_path)
                    img[<span class="hljs-string">'alt'</span>] = description
                    print(<span class="hljs-string">f"Added description: <span class="hljs-subst">{description}</span>"</span>, file=sys.stderr)
                <span class="hljs-keyword">else</span>:
                    print(<span class="hljs-string">f"Image file not found: <span class="hljs-subst">{img_path}</span>"</span>, file=sys.stderr)

        <span class="hljs-keyword">return</span> soup.prettify()
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        print(<span class="hljs-string">f"Error processing HTML: <span class="hljs-subst">{str(e)}</span>"</span>, file=sys.stderr)
        <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
    parser = argparse.ArgumentParser(description=<span class="hljs-string">"Generate image descriptions and modify HTML."</span>)
    parser.add_argument(<span class="hljs-string">"html_file"</span>, help=<span class="hljs-string">"Path to the HTML file to process"</span>)
    args = parser.parse_args()

    print(<span class="hljs-string">f"Processing HTML file: <span class="hljs-subst">{args.html_file}</span>"</span>, file=sys.stderr)
    modified_html = process_html(args.html_file)

    <span class="hljs-keyword">if</span> modified_html:
        <span class="hljs-comment"># Print to stdout</span>
        print(modified_html)
    <span class="hljs-keyword">else</span>:
        print(<span class="hljs-string">"Failed to process the HTML file."</span>, file=sys.stderr)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    main()
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Learning about video production and editing from ChatGPT and YouTube]]></title><description><![CDATA[I've been an active photographer most of my life, including a few years of doing it professionally (aerial , commercial, wedding, etc.). I recently sold my Nikon F-mount bodies and lenses and upgraded the new Nikon Z-series mirrorless platform. I acq...]]></description><link>https://gregwilson.tech/learning-about-video-production-and-editing-from-chatgpt-and-youtube</link><guid isPermaLink="true">https://gregwilson.tech/learning-about-video-production-and-editing-from-chatgpt-and-youtube</guid><category><![CDATA[llm]]></category><category><![CDATA[LLM's ]]></category><category><![CDATA[Nikon ]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[gemini]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[final cut pro]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Sun, 17 Sep 2023 19:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723697208992/62133546-34dd-4066-b3d5-80cde81321d6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723697195596/f350979c-3e06-4c2f-974b-c2fbedfba0e6.png" alt class="image--center mx-auto" /></p>
<p>I've been an active photographer most of my life, including a few years of doing it professionally (aerial, commercial, wedding, etc.). I recently sold my Nikon F-mount bodies and lenses and upgraded to the new Nikon Z-series mirrorless platform. I acquired the new <a target="_blank" href="https://www.nikonusa.com/en/nikon-products/product/mirrorless-cameras/z-8.html?ref=gregsramblings.com">Nikon Z8</a> and several superb Nikon Z lenses. The Z8 is an amazing camera for still photography, but it's also stunningly good at video – supporting 8K/60fps, 4K/120fps, in both 8-bit and 10-bit (<a target="_blank" href="https://www.nikonusa.com/en/nikon-products/product/mirrorless-cameras/z-8.html?ref=gregsramblings.com">full specs</a>). I knew the basics of video and could stumble through simple edits, but I had no idea how to do color grading on log footage, how to export and publish HDR, etc. As I started to dig in, I quickly learned that it is WAY more complicated than I thought! The more I learned, the further away I felt from being competent!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689087448/cc52ccd5-8c0d-45af-9d36-32185d07c5b2.png" alt="Screenshot of Final Cut Pro" /></p>
<p>I decided to dive deep and found many great YouTube channels that cover these topics (see below), but it was <a target="_blank" href="https://chat.openai.com/?ref=gregsramblings.com">ChatGPT</a> that helped me fill in the gaps of my knowledge. This is the first time I used an AI chatbot to learn anything significant.</p>
<p>Below is my full conversation with ChatGPT (v4) that spanned several days. I was surprised at how efficient this type of learning is. I would watch a couple of YouTube videos then go to ChatGPT to clarify whatever was confusing me. I thought y'all might like to see how it looks to have this type of exchange with an AI bot:</p>
<p><a target="_blank" href="https://www.gregsramblings.com/chatgpt-teaches-me-about-video-standards/">My conversation with ChatGPT about video file formats, HDR, publishing, etc.</a></p>
<p>I also went to <a target="_blank" href="https://bard.google.com/?ref=gregsramblings.com">Google Bard</a> when I had a specific question about YouTube:</p>
<p><a target="_blank" href="https://g.co/gemini/share/a802e6738ee1">‎Does youtube support dolby vision?</a></p>
<p>I even played around with <a target="_blank" href="https://claude.ai/?ref=gregsramblings.com">Anthropic Claude</a> (doesn't have a sharing feature, so including a screenshot below):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689091494/ee2f59e3-1d82-4565-b515-384f16b55282.png" alt /></p>
<p>Below are some great YouTube channels on video editing that I follow - mostly about Apple Final Cut Pro. I chose Final Cut Pro to take maximum advantage of my MacBook Pro M2 and because its HDR workflow is nice. Adobe Premiere Pro and DaVinci Resolve are fantastic options as well.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/@stalman?ref=gregsramblings.com">https://www.youtube.com/@stalman?ref=gregsramblings.com</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/@JennJager?ref=gregsramblings.com">https://www.youtube.com/@JennJager?ref=gregsramblings.com</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/@Serge1913?ref=gregsramblings.com">https://www.youtube.com/@Serge1913?ref=gregsramblings.com</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/@TamaraGabrielUK?ref=gregsramblings.com">https://www.youtube.com/@TamaraGabrielUK?ref=gregsramblings.com</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/@TheFinalCutBro?ref=gregsramblings.com">https://www.youtube.com/@TheFinalCutBro?ref=gregsramblings.com</a></div>
<p> </p>
<p>I still have a lot to learn, but I'm at least able to create something cool. For example:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=rLHW76xjJbA">https://www.youtube.com/watch?v=rLHW76xjJbA</a></div>
<p> </p>
<p>Lastly, I'll point out that through this entire learning exercise, I never once looked at the docs for Final Cut Pro, which is telling. Obviously those docs were part of the training content for ChatGPT, Bard, Claude, etc., but I never felt the need to go to the source.</p>
<p>:)</p>
]]></content:encoded></item><item><title><![CDATA[Flying the new DJI Mavic 3 Pro]]></title><description><![CDATA[There are tons of videos of folks raving about the new DJI Mavic 3 Pro. I can confirm that it is indeed awesome.
I shot the videos below using the new 3x 70mm camera shot in 4k/60fps HDR (HLG) with some very minor color correction and contrast boost ...]]></description><link>https://gregwilson.tech/flying-the-new-dji-mavic-3-pro</link><guid isPermaLink="true">https://gregwilson.tech/flying-the-new-dji-mavic-3-pro</guid><category><![CDATA[4K video]]></category><category><![CDATA[DJI]]></category><category><![CDATA[drone]]></category><category><![CDATA[HDR]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Fri, 21 Jul 2023 19:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723698067425/bdda0ef0-97ab-40f5-a55b-0f520a4c3fba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are tons of videos of folks raving about the new DJI Mavic 3 Pro. I can confirm that it is indeed awesome.</p>
<p>I shot the videos below using the new 3x 70mm camera shot in 4k/60fps HDR (HLG) with some very minor color correction and contrast boost in Final Cut Pro.</p>
<p>Make sure playback quality is set to 4K.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=rLHW76xjJbA">https://www.youtube.com/watch?v=rLHW76xjJbA</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=6Rq4YUmzQoY">https://www.youtube.com/watch?v=6Rq4YUmzQoY</a></div>
<p> </p>
<p>Tons of features to explore!</p>
]]></content:encoded></item><item><title><![CDATA[Gmail shortcuts will quadruple your speed]]></title><description><![CDATA[Recently I’ve been experimenting with using keyboard shortcuts in Gmail to avoid having to remove a hand from the keyboard to reach for the mouse or trackpad. It took some practice, but now it comes naturally and I love the increased efficiency.
I te...]]></description><link>https://gregwilson.tech/gmail-shortcuts-will-quadruple-your-speed</link><guid isPermaLink="true">https://gregwilson.tech/gmail-shortcuts-will-quadruple-your-speed</guid><category><![CDATA[gmail]]></category><category><![CDATA[mouseless]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Sat, 30 Jul 2022 19:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723698413502/2233b1b2-4484-4f59-b7b2-d21832c54a1f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently I’ve been experimenting with using keyboard shortcuts in Gmail to avoid having to remove a hand from the keyboard to reach for the mouse or trackpad. It took some practice, but now it comes naturally and I love the increased efficiency.</p>
<p>I tend to like keyboard shortcuts because I’ve spent decades using the vi text editor to write code. <a target="_blank" href="https://en.wikipedia.org/wiki/Vi?ref=gregsramblings.com">Vi</a> was originally created in the mid-1970s for the Unix operating system before anyone used a mouse or trackpad, so all of the functionality was implemented as keyboard shortcuts. Because your hands never leave the keyboard, programmers using vi tend to be very fast, especially if they master all of the features. As a result, it’s still a popular editor with programmers.</p>
<h2 id="heading-turning-on-gmail-keyboard-shortcuts">Turning on Gmail keyboard shortcuts</h2>
<p>The first thing you need to do is make sure keyboard shortcuts are turned on. Go to Settings, choose See all settings, then scroll down to Keyboard shortcuts and turn them on.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689107860/03a9748e-6220-4391-b401-63abeebf992a.png" alt="Screenshot showing how to turn on keyboard shortcuts" /></p>
<h2 id="heading-press-when-in-gmail-and-youll-get-this">Press "?" when in Gmail and you'll get this:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689109080/ca4db970-e406-4bf3-8f18-d9243408a850.png" alt="Screenshot of all Gmail shortcuts accessible from inside of Gmail by typing &quot;?&quot;" class="image--center mx-auto" /></p>
<h2 id="heading-shortcuts-i-use-when-viewing-my-inbox">Shortcuts I use when viewing my inbox</h2>
<ul>
<li>You’ve probably never noticed it before, but there is a thin blue bar on the left of the first email in your inbox.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689110153/d5f75f5f-b911-4541-9efb-7203ec71ff9d.png" alt="Screenshot showing the selected email indicator" class="image--center mx-auto" /></p>
<ul>
<li><p>Press the <strong>j</strong> and <strong>k</strong> keys a few times to make the blue bar go up and down the list.</p>
</li>
<li><p>To open the email, press <strong>o</strong> or <strong>enter</strong>.</p>
</li>
<li><p>To select the currently highlighted email, press the <strong>x</strong> key. You’ll see the checkbox beside the email become selected. Once you select one or more emails, you can take action on them to archive, forward, reply, etc. using the shortcuts below.</p>
</li>
<li><p>To search your emails, hit the forward slash <strong>/</strong></p>
</li>
</ul>
<h2 id="heading-shortcuts-to-use-when-viewing-an-individual-email-conversation">Shortcuts to use when viewing an individual email conversation</h2>
<p>If your current workflow is to open an email, take an action, then go back to the inbox to open the next email, using the following shortcuts will save you a lot of time! While looking at an individual email, you can press:</p>
<ul>
<li><p><strong>j</strong> and <strong>k</strong> to go to the older/newer email conversation.</p>
</li>
<li><p><strong>{</strong> and <strong>}</strong> are similar to <strong>j</strong> and <strong>k</strong>, but the current email conversation will be archived before moving to the older/newer one.</p>
</li>
</ul>
<p>I tend to use <strong>{</strong> and <strong>}</strong> more frequently because I try to handle each email as I get to it rather than postponing action.</p>
<h2 id="heading-taking-action-on-the-currently-opened-email">Taking action on the currently opened email</h2>
<ul>
<li><p><strong>r</strong> to start a reply or <strong>a</strong> to start a reply-all and put your cursor in the body so you can immediately start typing.</p>
</li>
<li><p><strong>f</strong> to forward the email.</p>
</li>
<li><p><strong>s</strong> to star the email. If you have multiple stars set up — press s multiple times to toggle between different stars.</p>
</li>
<li><p><strong>;</strong> will expand all emails in the current conversation. When viewing an email conversation with multiple emails, Gmail collapses the emails that you have seen before, which is mostly awesome, but sometimes I want to re-read the entire conversation from the top. This shortcut eliminates the need to expand each email in the conversation one by one.</p>
</li>
<li><p><strong>!</strong> to mark as spam</p>
</li>
<li><p><strong>#</strong> to delete</p>
</li>
<li><p><strong>u</strong> will go back to the inbox</p>
</li>
<li><p><strong>m</strong> will mute the current conversation — When you mute a conversation, new messages added to the conversation bypass your inbox so that the conversation stays archived. Muted conversations will only pop back into your inbox ready for your attention if a new message in the conversation is addressed to you and no one else, or if you’re added to the “To” or “Cc” line in a new message.</p>
</li>
<li><p>And don’t forget <strong>z</strong> to undo the last action!</p>
</li>
</ul>
<h2 id="heading-a-real-world-scenario">A real-world scenario</h2>
<p>It’s Tuesday and I’m late to work due to my dog’s vet appointment. When I finally arrive and open up Gmail, I have 53 unread emails. Here’s how I attack it:</p>
<p>I open the first email in my main inbox and then start using keyboard shortcuts to work the list. My keystrokes look something like this:</p>
<p><strong>{{{ss{{s{{a(send reply){{a(send reply){;{f(forward email){{{sss{;m{{{r(send reply)s{s{{s{{f(forward email){{{{a(send reply){u</strong></p>
<p>Yes, it looks cryptic, but once you get used to these shortcuts, you’ll be amazed at how fast you can work an inbox.</p>
<p>Here’s a breakdown of the first few keystrokes:</p>
<ul>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>s</strong> star the email</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>s</strong> star the email</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>a</strong> reply-all to the current email</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>a</strong> reply-all to the current email</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>;</strong> expand all emails in the current conversation</p>
</li>
<li><p><strong>{</strong> archive the email and move to the next one in my inbox</p>
</li>
<li><p><strong>f</strong> forward the email</p>
</li>
<li><p>…and so on</p>
</li>
</ul>
<p>In this example, I started with the newest email and worked my way back in time (newest to oldest).</p>
<p>There are many other shortcuts that I’m not covering in this post, but you can see the full list by pressing “?” in Gmail.</p>
]]></content:encoded></item><item><title><![CDATA[Play Chess via messaging apps]]></title><description><![CDATA[While watching The Queen’s Gambit, I was reminded about how much I used to enjoy playing chess. I was eager to play a game, so I started to tweet, “D2-D4” knowing that some of Twitter followers would recognize this as an opening move and likely respo...]]></description><link>https://gregwilson.tech/play-chess-via-messaging-apps</link><guid isPermaLink="true">https://gregwilson.tech/play-chess-via-messaging-apps</guid><category><![CDATA[chess]]></category><category><![CDATA[chessboard.js]]></category><category><![CDATA[chess online]]></category><category><![CDATA[google cloud run]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Sat, 30 Jul 2022 19:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689105854/c6335a88-d079-4975-8d5b-0f8ce7801f9f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While watching <a target="_blank" href="https://www.netflix.com/title/80234304?ref=gregsramblings.com">The Queen’s Gambit</a>, I was reminded about how much I used to enjoy playing chess. I was eager to play a game, so I started to tweet, “D2-D4” knowing that some of Twitter followers would recognize this as an opening move and likely respond with their move, giving me the fix I needed. I paused before hitting the tweet button because I realized that I’d need to set up a board (physical or virtual) to keep track of the game. If I received multiple responses, I’d need multiple boards. I decided not to send the tweet.</p>
<p>Later in the day, I had the idea to create a simple service that enables my use case. Instead of designing yet another chess site (I <a target="_blank" href="https://www.youtube.com/watch?v=dP69x3Bg-mY&amp;ref=gregsramblings.com">built one</a> with a friend in 2009 that is long since gone), I decided to create a chess board logger and visualizer to make it practical to play via Twitter or any other messaging/social platform. I didn't have a lot of time, so I was optimizing for simplicity.</p>
<p>Instead of tweeting moves back and forth, players tweet links back and forth, and those links go to a site that renders the current chessboard, allows a new move, and creates a new link to paste back to the opponent. I wanted this to be 100% serverless, meaning that it will scale to zero and have zero maintenance requirements. Excited about this idea, I put together a shopping list:</p>
<p><strong>My MVP requirements:</strong></p>
<ul>
<li><p>Represent the board position—ideally completely in the URL to keep it stateless from a server-side perspective</p>
</li>
<li><p>Display a chessboard and let the player make their next move.</p>
</li>
</ul>
<p><strong>Stretch goals:</strong></p>
<ul>
<li><p>Enforce chess rules (allow only legal moves).</p>
</li>
<li><p>Dynamically create a png/jpg of the chessboard that I can use as an <a target="_blank" href="https://ogp.me/?ref=gregsramblings.com">Open Graph</a> and <a target="_blank" href="https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/summary-card-with-large-image?ref=gregsramblings.com">Twitter card</a> image so that when a player sends the link, the image of the board will automatically display.</p>
</li>
</ul>
<h2 id="heading-putting-it-all-together">Putting it all together</h2>
<h3 id="heading-representing-the-board-position">Representing the board position</h3>
<p>There is a standard notation for describing a particular board position of a chess game called <a target="_blank" href="https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation?ref=gregsramblings.com">Forsyth–Edwards Notation (FEN)</a> that was exactly what I needed. A FEN is a sequence of ASCII characters. For example, the starting position for any chess game can be represented by the following string:</p>
<p><code>rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1</code></p>
<p>Each letter is a piece: pawn = "P", knight = "N", bishop = "B", rook = "R", queen = "Q" and king = "K". Uppercase letters represent white pieces and lowercase letters represent black. The last part of the string is specific to certain rules in chess (<a target="_blank" href="https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation?ref=gregsramblings.com">read more about FEN</a>).</p>
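<p>Reading a FEN is mechanical: ranks are separated by "/" and digits are runs of empty squares. A quick illustrative sketch in Python (chessmsgs.com itself does this client-side with chess.js):</p>

```python
def fen_to_board(fen: str):
    """Expand the piece-placement field of a FEN into an 8x8 grid.
    Letters are pieces; digits are runs of empty squares ('.')."""
    placement = fen.split()[0]          # first space-separated field
    board = []
    for rank in placement.split("/"):   # ranks run from black's side to white's
        row = []
        for ch in rank:
            if ch.isdigit():
                row.extend(["."] * int(ch))
            else:
                row.append(ch)
        board.append(row)
    return board

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
board = fen_to_board(start)
```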
<p>I knew I could use this in the URL, so my first requirement was complete: I was able to represent the board state entirely in the URL, eliminating the need for a backend data store.</p>
<h3 id="heading-displaying-the-chessboard-and-allowing-drag-and-drop-moves">Displaying the chessboard and allowing drag-and-drop moves</h3>
<p>Numerous chess libraries are available. One in particular that caught my eye was <a target="_blank" href="https://github.com/oakmac/chessboardjs/?ref=gregsramblings.com">chessboard.js</a>—described as “<strong>a JavaScript chessboard component with a flexible ‘just a board’ API</strong>”. I quickly discovered that this library can display chess boards from a FEN, allow pieces to be moved, and update the FEN. Perfect!</p>
<p>In only two hours, I had the basic functionality implemented.</p>
<h3 id="heading-enforcing-chess-rules">Enforcing chess rules</h3>
<p>I originally thought that making this service aware of chess rules would be difficult, but then I saw the <a target="_blank" href="https://chessboardjs.com/examples?ref=gregsramblings.com#5000">example in the chessboard.js docs</a> showing how to integrate it with another library called <a target="_blank" href="https://github.com/jhlywa/chess.js?ref=gregsramblings.com">chess.js</a>—“<strong>a JavaScript chess library that is used for chess move generation/validation, piece placement/movement, and check/checkmate/stalemate detection—basically everything but the AI</strong>”. A short time later, I had it working! Stretch goal #1 completed.</p>
<p>Here's what a couple of game moves look like:</p>
<p>Moving the pawn from D2 to D4 in a new game—<a target="_blank" href="https://chessmsgs.com/?fen=rnbqkbnr%2Fpppppppp%2F8%2F8%2F3P4%2F8%2FPPP1PPPP%2FRNBQKBNR+b+KQkq+d3+0+1&amp;to=d4&amp;from=d2&amp;gid=mOhlhRlMboYsHLqBF1f7I&amp;ref=gregsramblings.com">https://chessmsgs.com/?fen=rnbqkbnr%2Fpppppppp%2F8%2F8%2F3P4%2F8%2FPPP1PPPP%2FRNBQKBNR+b+KQkq+d3+0+1&amp;to=d4&amp;from=d2&amp;gid=mOhlhRlMboYsHLqBF1f7I</a></p>
<p>Black countering with a similar move of pawn from D7 to D5—<a target="_blank" href="https://chessmsgs.com/?fen=rnbqkbnr%2Fppp1pppp%2F8%2F3p4%2F3P4%2F8%2FPPP1PPPP%2FRNBQKBNR+w+KQkq+d6+0+2&amp;to=d5&amp;from=d7&amp;gid=mOhlhRlMboYsHLqBF1f7I&amp;ref=gregsramblings.com">https://chessmsgs.com/?fen=rnbqkbnr%2Fppp1pppp%2F8%2F3p4%2F3P4%2F8%2FPPP1PPPP%2FRNBQKBNR+w+KQkq+d6+0+2&amp;to=d5&amp;from=d7&amp;gid=mOhlhRlMboYsHLqBF1f7I</a></p>
<p>The URL has the following data:</p>
<ul>
<li><p><strong>fen</strong>—the new board position</p>
</li>
<li><p><strong>from</strong> and <strong>to</strong>—indicating what move occurred (I use this to highlight the squares)</p>
</li>
<li><p><strong>gid</strong>—a unique game ID (I used <a target="_blank" href="https://www.npmjs.com/package/nanoid?ref=gregsramblings.com">nanoid</a>). I’ll use this to connect moves to a single game in the future. For example, I could add a feature that lets a player request the entire game transcript.</p>
</li>
</ul>
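<p>Assembling a move link from those pieces is nearly a one-liner with <code>URLSearchParams</code>, which also produces the same <code>%2F</code> and <code>+</code> escaping you see in the URLs above. A hypothetical sketch (the parameter names come from the list above; the helper itself is my own):</p>

```javascript
// Hypothetical helper (my own, not from the chessmsgs source): build a move
// URL from the four query parameters described above.
function buildMoveUrl(fen, from, to, gid) {
  const params = new URLSearchParams({ fen, to, from, gid });
  return `https://chessmsgs.com/?${params.toString()}`;
}

const url = buildMoveUrl(
  'rnbqkbnr/pppppppp/8/8/3P4/8/PPP1PPPP/RNBQKBNR b KQkq d3 0 1',
  'd2', 'd4', 'mOhlhRlMboYsHLqBF1f7I'
);

// The receiving page can decode the position straight back out of the URL.
const parsed = new URL(url).searchParams;
console.log(parsed.get('from')); // → d2
```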
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689103076/d76394bd-c580-4bae-9620-1c3fa387e19c.png" alt="screenshot of chessmsgs.com" /></p>
<h3 id="heading-done-except">Done! Except...</h3>
<p>At this point, there were no server requirements other than simple static HTML hosting. But after playing a few games with friends and family, I decided that I really wanted to accomplish the other stretch goal: dynamically create a PNG/JPG of the chessboard to use as the Open Graph and Twitter card image. With this capability, an image of the board automatically displays when a player sends the link. Without it, the game is a series of ugly URLs.</p>
<h3 id="heading-dynamically-creating-the-open-graph-image">Dynamically creating the Open Graph image</h3>
<p>This stretch goal finally introduced server-side requirements. I needed two things to happen on the server.</p>
<p>First, I needed to dynamically generate a board image from a FEN. Once again, open source to the rescue (almost). I found <a target="_blank" href="https://github.com/andyruwruw/chess-image-generator?ref=gregsramblings.com">chess-image-generator</a>, a JavaScript library that creates a png from a <a target="_blank" href="https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation?ref=gregsramblings.com">FEN</a>. I wrapped this in a bit of Node.js/Express code so that I could access the image as if it were static. For example, here’s a demo of the real endpoint: <a target="_blank" href="https://chessmsgs.com/fenimg/v1/rnbqkb1r/ppp1pppp/5n2/3p4/3P4/2N5/PPP1PPPP/R1BQKBNR%20w%20KQkq%20-%202%203.png?ref=gregsramblings.com">https://chessmsgs.com/fenimg/v1/rnbqkb1r/ppp1pppp/5n2/3p4/3P4/2N5/PPP1PPPP/R1BQKBNR w KQkq - 2 3.png</a>. This link results in this image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689104188/00c4a1a5-b5e5-486f-8910-998b088d5130.png" alt="Dynamically created chessboard in chessmsgs.com" class="image--center mx-auto" /></p>
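<p>The interesting part of that endpoint is recovering the FEN from the request path before rendering. A minimal sketch of how that decoding might look (the function name and exact behavior are my assumptions, not necessarily what the chessmsgs server does):</p>

```javascript
// Sketch: turn a request path like
//   /fenimg/v1/rnbqkb1r/.../R1BQKBNR%20w%20KQkq%20-%202%203.png
// back into a FEN string. In an Express handler, this FEN would then be fed
// to chess-image-generator and the resulting PNG sent as the response body.
function fenFromImagePath(pathname) {
  const prefix = '/fenimg/v1/';
  if (!pathname.startsWith(prefix) || !pathname.endsWith('.png')) return null;
  return decodeURIComponent(pathname.slice(prefix.length, -'.png'.length));
}

const fen = fenFromImagePath(
  '/fenimg/v1/rnbqkb1r/ppp1pppp/5n2/3p4/3P4/2N5/PPP1PPPP/R1BQKBNR%20w%20KQkq%20-%202%203.png'
);
console.log(fen);
// → rnbqkb1r/ppp1pppp/5n2/3p4/3P4/2N5/PPP1PPPP/R1BQKBNR w KQkq - 2 3
```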
<p>Second, I needed to dynamically inject this FEN-embedded URL into the content attribute of the meta tag in the main HTML. Like me, you might be thinking that you could just do some DOM manipulation in JavaScript and avoid having to dynamically change HTML on the server. But, the Open Graph image is retrieved by a bot from whatever service you use for messaging. These bots don’t execute any client-side JavaScript and expect all values to be static. So, that led to additional server-side work.</p>
<p>I needed to dynamically convert this:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">meta</span> <span class="hljs-attr">property</span>=<span class="hljs-string">"og:url"</span> <span class="hljs-attr">content</span>=<span class="hljs-string">"{{url}}"</span> /&gt;</span>
</code></pre>
<p>Into something like this:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">meta</span> <span class="hljs-attr">property</span>=<span class="hljs-string">"og:url"</span> <span class="hljs-attr">content</span>=<span class="hljs-string">"https://chessmsgs.com/?fen=rnbqkb1r/ppp1pppp/5n2/3p4/3P4/2N5/PPP1PPPP/R1BQKBNR+w+KQkq+-+2+3&amp;to=f6&amp;from=g8&amp;gid=ziL3VfMEoIT9iNwp6csBh"</span> /&gt;</span>
</code></pre>
<p>I could have used one of many Node templating engines for this, but they all seemed like overkill for such a simple substitution, so I just wrote a few lines of <code>string.replace()</code> calls in my Node server.</p>
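<p>A minimal sketch of that substitution approach (the <code>{{url}}</code> placeholder matches the meta tag shown above; the helper and its name are my own):</p>

```javascript
// Minimal placeholder substitution (my own sketch; only the {{url}}
// placeholder comes from the template shown above).
function renderPage(template, values) {
  let html = template;
  for (const [key, value] of Object.entries(values)) {
    html = html.split(`{{${key}}}`).join(value); // replace every occurrence
  }
  return html;
}

const template = '<meta property="og:url" content="{{url}}" />';
console.log(renderPage(template, { url: 'https://chessmsgs.com/?fen=...&gid=abc' }));
// → <meta property="og:url" content="https://chessmsgs.com/?fen=...&gid=abc" />
```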
<p>With this functionality added, a game on Twitter (and other services) now looks much better:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689104993/0c0bf913-a1f6-43a6-830b-cb29176f5b5d.png" alt /></p>
<h3 id="heading-check-out-the-code">Check out the code</h3>
<p>The source for chessmsgs.com is available on GitHub at <a target="_blank" href="https://github.com/gregsramblings/chessmsgs?ref=gregsramblings.com">https://github.com/gregsramblings/chessmsgs</a>.</p>
<h3 id="heading-deciding-where-to-host-it">Deciding where to host it</h3>
<p>The hosting requirements are simple. I needed support for Node.js/Express, domain mapping, and SSL. I was working at Google Cloud at the time and wanted to go completely serverless, which quickly led me to <a target="_blank" href="https://cloud.google.com/run?ref=gregsramblings.com">Cloud Run</a>. Cloud Run is a managed platform that enables you to run stateless containers that are invocable via web requests or Pub/Sub events.</p>
<h3 id="heading-whats-next">What’s next?</h3>
<p>If I really wanted to engineer this for extreme loads, I could easily deploy it to multiple regions around the globe and set up a load balancer and possibly a CDN.</p>
<p>When I first started thinking about the image generation, I naturally thought about caching common board images in Google Cloud Storage. This would be easy to do, and storage is crazy cheap. But then I did a bit of research and learned the following fun facts. After two moves (one move for each player), there are 400 distinct board positions. After each player moves again (two moves each), there are 71,782 distinct positions. After each player moves a third time (three moves each), there are 9,132,484 distinct positions! I could gain a bit of performance by caching the most popular openings, but each game would quickly go beyond the cached images, so it didn’t seem worth it. By the way, caching every possible board position would mean roughly 10<sup>46</sup> positions, which is a <strong>massive</strong> number.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>This was a fun project – almost therapeutic for me since my “day job” doesn’t allow much time for writing code.</p>
<p>Now that I work at AWS (managing technical content, documentation, SDKs, CLI), I might use this as an excuse to play with Lambda. I just need more time!</p>
]]></content:encoded></item><item><title><![CDATA[Unsend your mistake in Gmail]]></title><description><![CDATA[By default, Gmail allows you to unsend an email within 5 seconds of clicking the send button. As soon as you click send, you’ll see a dialog in the bottom-left corner with an “Undo” link. If you’re on your mobile device, you’ll see a similar dialog a...]]></description><link>https://gregwilson.tech/unsend-your-mistake-in-gmail</link><guid isPermaLink="true">https://gregwilson.tech/unsend-your-mistake-in-gmail</guid><category><![CDATA[gmail]]></category><category><![CDATA[undo]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Thu, 28 Jul 2022 19:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723698281809/1a4670e9-7859-40f2-83db-03cea9a77c0e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>By default, Gmail allows you to unsend an email within 5 seconds of clicking the send button. As soon as you click send, you’ll see a dialog in the bottom-left corner with an “Undo” link. If you’re on your mobile device, you’ll see a similar dialog at the bottom.</p>
<p>I found that 5 seconds is a bit fast. I usually don’t realize I’ve accidentally done a reply-all (or made a worse mistake) until a few seconds later. Thankfully, Gmail allows you to configure the delay.</p>
<p>Go to Settings -&gt; See all settings, then adjust the “Send cancellation period” under “Undo Send”:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689122427/9d41f761-bab6-4491-8c09-95cd3f821cf3.png" alt /></p>
]]></content:encoded></item><item><title><![CDATA[Chrome feature: Mute site]]></title><description><![CDATA[Some websites insist on auto-playing video/audio content and it drives me nuts. I recently discovered that Chrome now allows you to mute a site. The screenshot below illustrates.

If content starts playing, you will see a small icon in the tab indica...]]></description><link>https://gregwilson.tech/chrome-feature-mute-site</link><guid isPermaLink="true">https://gregwilson.tech/chrome-feature-mute-site</guid><category><![CDATA[Google Chrome]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Thu, 28 Jul 2022 19:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723698220366/742b88ba-b046-4062-a4b9-33862a1205ce.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Some websites insist on auto-playing video/audio content and it drives me nuts. I recently discovered that Chrome now allows you to mute a site. The screenshot below illustrates.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723689118414/a9ab3ffe-947b-44d1-94c3-1a5f5f30a742.png" alt="Screenshot showing how to mute a specific tab in Chrome browser" /></p>
<p>If content starts playing, you will see a small icon in the tab indicating that it’s muted.</p>
]]></content:encoded></item><item><title><![CDATA[Bricking a Mars probe]]></title><description><![CDATA[Like many of you, I love following space exploration and have always been a space program fanatic. When I was a kid, I knew more details about the Mercury, Gemini and Saturn programs than any other kid or teacher in school.
The recent Curiosity progr...]]></description><link>https://gregwilson.tech/bricking-a-mars-probe</link><guid isPermaLink="true">https://gregwilson.tech/bricking-a-mars-probe</guid><category><![CDATA[space exploration]]></category><dc:creator><![CDATA[Greg Wilson]]></dc:creator><pubDate>Mon, 06 Apr 2020 07:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724298266412/639a4749-c4f6-4663-91ba-d581241da52e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Like many of you, I love following space exploration and have always been a space program fanatic. When I was a kid, I knew more details about the Mercury, Gemini and Saturn programs than any other kid or teacher in school.</p>
<p>The recent Curiosity program got me thinking back to the old Mars Viking program. In 1976, NASA landed two probes on Mars that sent back the first pictures in history from the surface of the planet. This was the first time a human-made device had successfully operated on the Martian surface, so there were a ton of unknowns. I was very young, but I remember waiting for those first pictures from Mars, anxious to find out if the little green men would be friendly.</p>
<p>Here’s the first panorama ever sent from Mars (shot by Viking 1 lander – click it to go to the source and high-res versions):</p>
<p><a target="_blank" href="https://upload.wikimedia.org/wikipedia/commons/1/10/Mars_Viking_12a002.png"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/10/Mars_Viking_12a002.png/800px-Mars_Viking_12a002.png" alt /></a></p>
<p>I was reading about the Viking missions and stumbled on a little-known yet fascinating fact about the Viking 1 lander. On November 11, 1982, an over-the-air software update was broadcast to the Viking 1 lander to correct a battery charging issue. The update had an error in it and accidentally overwrote the wrong bits, bricking the lander. Basically, the update overwrote the memory reserved for the antenna-pointing code, which immediately terminated all communications. The mission had already been a huge success, so it wasn’t a huge deal in the grand scheme of things, but I bet those responsible for the error had a big “oh crap, we just killed the probe!” moment. I’d love to meet these folks; I bet it’s a great story. I imagine a few NASA engineers in a conference room going through a list of memory addresses that were part of the update, trying to figure out why they didn’t get any acknowledgement from the probe, then finding the mistake and looking at each other in shocked disbelief and fear.</p>
<p>I can remember working from home on a production system and accidentally typing in the wrong iptables command resulting in the instant inability to connect to the server. At least I was able to drive 30 minutes to the data center and connect a console to the server and undo my mistake!</p>
<h2 id="heading-russian-oops-phobos-1">Russian oops – Phobos 1</h2>
<center><a href="https://upload.wikimedia.org/wikipedia/commons/0/01/Phobos_Marte.jpg"><img src="https://upload.wikimedia.org/wikipedia/commons/0/01/Phobos_Marte.jpg" /></a></center>

<p>I also learned that a similar mistake happened with the Russian Phobos 1 orbital mars mission. From <a target="_blank" href="http://en.wikipedia.org/wiki/Phobos_program">http://en.wikipedia.org/wiki/Phobos_program</a> —</p>
<blockquote>
<p>“Phobos 1 operated nominally until an expected communications session on September 2, 1988 failed to occur. The failure of controllers to regain contact with the spacecraft was traced to an error in the software uploaded on August 29/August 30, which had deactivated the attitude thrusters. By losing its lock on the Sun, the spacecraft could no longer properly orient its solar arrays, thus depleting its batteries.</p>
<p>Software instructions to turn off the probe’s attitude control, normally a fatal operation, were part of a routine used when testing the spacecraft on the ground. Normally this routine would be removed before launch. However, the software was coded in PROMs, and so removing the test code would have required removing and replacing the entire computer. Because of time pressure from the impending launch, engineers decided to leave the command sequence in, though it should never be used. However, a single-character error in constructing an upload sequence resulted in the command executing, with subsequent loss of the spacecraft.”</p>
</blockquote>
<p>Oops!</p>
<h2 id="heading-software-heroes-galileo">Software Heroes -– Galileo</h2>
<center><a href="https://en.wikipedia.org/wiki/Galileo_(spacecraft)" target="_blank"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/9c/Artwork_Galileo-Io-Jupiter.JPG/1920px-Artwork_Galileo-Io-Jupiter.JPG" /></a></center>

<p>I also stumbled on a fascinating story about the NASA Galileo Jupiter probe launched in 1989. During its long trip to Jupiter, the high-gain antenna failed to open, even after many ingenious attempts at freeing it, including spinning the probe (among other wild tactics). This meant that NASA was stuck using the low-gain antenna. The difference in bandwidth was dramatic: 10 bps vs. 134 kbps, more than 10,000 times slower! While the crippled probe hurtled toward Jupiter, software engineers had to do some serious rewriting to salvage as much of the mission as possible. They rewrote the encoding/decoding software for better compression and optimized the image compression algorithms, among many other improvements. Then they had to push these updates to the probe. Can you imagine the stress? Amazing stuff. More details at <a target="_blank" href="https://www.nasa.gov/pdf/546504main_42s_galilieo_rocky_road_jupiter.pdf">https://www.nasa.gov/pdf/546504main_42s_galilieo_rocky_road_jupiter.pdf</a> and at <a target="_blank" href="http://www.lpi.usra.edu/publications/newsletters/lpib/lpib76/gal76.html">http://www.lpi.usra.edu/publications/newsletters/lpib/lpib76/gal76.html</a>.</p>
<p>Software developers who write code for spacecraft are my heroes. They work under incredible stress and have to cope with extremely low bandwidth, very limited resources, and a non-touchable, non-upgradable hardware platform, and they typically write very low-level code. Engineers are still occasionally sending updates to the Voyager probes launched in the ’70s! They communicate at 160 bps, and it takes about 16 hours for any instructions to be received. I get impatient when a git pull takes more than 5 seconds!</p>
<p>Lastly, I was reminded by a friend of yet another mission where engineers saved the day, and this one was fairly recent (2004) — The Huygens Titan probe. Read all about it at <a target="_blank" href="http://www.thespacereview.com/article/306/1">http://www.thespacereview.com/article/306/1</a></p>
]]></content:encoded></item></channel></rss>