<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Developer Tools on Mark Wolfe&#39;s Blog</title>
    <link>https://www.wolfe.id.au/tags/developer-tools/</link>
    <description>Recent content in Developer Tools on Mark Wolfe&#39;s Blog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sun, 07 Dec 2025 08:55:22 +1000</lastBuildDate><atom:link href="https://www.wolfe.id.au/tags/developer-tools/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>How I Work with AI Coding Agents</title>
      <link>https://www.wolfe.id.au/2025/12/07/how-i-work-with-ai-coding-agents/</link>
      <pubDate>Sun, 07 Dec 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/12/07/how-i-work-with-ai-coding-agents/</guid>
      <description>&lt;p&gt;For anyone who has been following AI and software development, things are changing rapidly, and this includes how we build software.&lt;/p&gt;
&lt;p&gt;Over the last few months, I have found myself going from working alone to working with an AI agent, such as &lt;a href=&#34;https://claude.ai/&#34;&gt;Anthropic&amp;rsquo;s Claude&lt;/a&gt;, &lt;a href=&#34;https://openai.com/codex/&#34;&gt;OpenAI&amp;rsquo;s Codex&lt;/a&gt;, &lt;a href=&#34;https://ampcode.com/&#34;&gt;amp&lt;/a&gt;, or &lt;a href=&#34;https://geminicli.com/&#34;&gt;Google&amp;rsquo;s Gemini CLI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This change has been both exciting and challenging. With the help of these agents, I have been able to delegate tasks and focus on the most important work.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For anyone who has been following AI and software development, things are changing rapidly, and this includes how we build software.</p>
<p>Over the last few months, I have found myself going from working alone to working with an AI agent, such as <a href="https://claude.ai/">Anthropic&rsquo;s Claude</a>, <a href="https://openai.com/codex/">OpenAI&rsquo;s Codex</a>, <a href="https://ampcode.com/">amp</a>, or <a href="https://geminicli.com/">Google&rsquo;s Gemini CLI</a>.</p>
<p>This change has been both exciting and challenging. With the help of these agents, I have been able to delegate tasks and focus on the most important work.</p>
<p>With a shift in mindset, I&rsquo;ve been able to delegate more effectively and get predictable outcomes.</p>
<h2 id="the-death-of-the-chat-box">The death of the Chat Box</h2>
<p>Over the last few months, I have found myself moving away from transactional interactions with AI agents via a chat box, to a more collaborative approach. Instead of asking the AI agent to fix an issue and then reviewing the results, I am now working in a more iterative way. This has led me to follow a more <a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html">specification driven development process</a> which is a great way to ensure more predictable and reliable results.</p>
<p>This process looks like:</p>
<ol>
<li>Provide details of the problem, feature, or bug, then work with the agent to put together a plan.</li>
<li>Review the plan, remove any unnecessary steps and focus on the most important ones. I then ask the agent to export the plan to a specification in a <a href="https://daringfireball.net/projects/markdown/">markdown</a> file in the codebase.</li>
<li>I then clear the context (<code>/clear</code> in Claude Code) and ask the agent to review the specification and provide feedback. This typically highlights a few areas that need to be addressed.</li>
<li>If the specification looks good, and I am clear on the outcomes, I instruct the agent to start work on it; this typically proceeds in one or more phases.</li>
<li>I then do some testing, review the results and provide feedback.</li>
<li>I clear the context and get the agent to review the specification and the outcomes, then we update it with the results.</li>
<li>Finally, I clear the context and get the agent to review the outcomes and provide feedback; for code this is done using a code review skill or sub-agent. Once we have completed this process I can commit the changes to the codebase.</li>
</ol>
<p>This process is useful for most tasks, such as building new features or refactoring existing ones, but I find a scaled-back version of it is valuable even for small tasks, to keep the agent on track.</p>
<p><strong>NOTE:</strong> During long conversations with the AI agent it is important to keep the context clear; otherwise it fills up with irrelevant discussion, which hurts performance and distracts the agent.</p>
<h2 id="documentation-is-king">Documentation is King</h2>
<p>Documentation is the backbone of any software project and with the rise of AI, it is becoming even more important. The ability to quickly and easily create a specification, then iterate on it, and use it to drive the development process with an AI agent is key to ensuring work stays on track.</p>
<p>Why is maintaining a specification important?</p>
<ul>
<li>It helps you establish a clear goal and plan for what the AI agent is going to do.</li>
<li>It provides a reference point for other developers and stakeholders.</li>
<li>Once changes are complete the specification can be used to update the documentation used by customers.</li>
</ul>
<p>The idea of documenting software isn&rsquo;t new; it has been practiced since people started writing software. I am personally enjoying this renaissance of documentation, as everyone wins, from the developers who write the code to the customers who use it.</p>
<h2 id="conclusion">Conclusion</h2>
<p>This is a new way of working. It won&rsquo;t be perfect, especially while you&rsquo;re figuring out how to work with the AI agent. But it is an opportunity to improve your productivity by embracing this new paradigm.</p>
<p>Key takeaways:</p>
<ul>
<li>Embrace specification driven development, as this is the foundation of good software development.</li>
<li>Ensure specifications are reviewed and questioned before the AI agent starts work; this avoids wasting time reworking or removing pointless changes.</li>
<li>Be collaborative with your AI agent, ask questions, sweat the details and be patient as you learn to build up your intuition and confidence with these tools.</li>
</ul>
<p>One big thing to understand is that AI agents are especially valuable when tackling tasks you aren&rsquo;t familiar with. Tell the agent up front that your goal is both to solve the problem and to learn how the solution works; this helps the agent clearly understand your goals and deliver the best possible outcomes.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://gist.github.com/wolfeidau/0be9b3b56ebca452375404baddf33777">A recent side project specification written with Claude Code</a></li>
<li><a href="https://www.anthropic.com/engineering/claude-code-best-practices">Claude Code: Best practices for agentic coding</a></li>
<li><a href="https://brooker.co.za/blog/2025/12/16/natural-language.html">Marc Brooker: On the success of ‘natural language programming’</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why Connect RPC is a great choice for building APIs</title>
      <link>https://www.wolfe.id.au/2025/12/02/why-connect-rpc-is-a-great-choice-for-building-apis/</link>
      <pubDate>Tue, 02 Dec 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/12/02/why-connect-rpc-is-a-great-choice-for-building-apis/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://connectrpc.com/&#34;&gt;Connect RPC&lt;/a&gt; is a suite of libraries for building HTTP-based APIs that are gRPC compatible. It provides a bridge between &lt;a href=&#34;https://grpc.io/&#34;&gt;gRPC&lt;/a&gt; and HTTP/1.1, letting you leverage HTTP/2&amp;rsquo;s multiplexing and performance benefits while still supporting HTTP/1.1 clients. This makes it a great solution for teams looking to get the performance benefits of gRPC while maintaining broad client compatibility.&lt;/p&gt;
&lt;p&gt;HTTP/2&amp;rsquo;s multiplexing and binary framing make it significantly more efficient than HTTP/1.1, reducing latency and improving throughput. Connect RPC lets you harness these benefits while maintaining broad client compatibility for services that can&amp;rsquo;t yet support HTTP/2.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://connectrpc.com/">Connect RPC</a> is a suite of libraries for building HTTP-based APIs that are gRPC compatible. It provides a bridge between <a href="https://grpc.io/">gRPC</a> and HTTP/1.1, letting you leverage HTTP/2&rsquo;s multiplexing and performance benefits while still supporting HTTP/1.1 clients. This makes it a great solution for teams looking to get the performance benefits of gRPC while maintaining broad client compatibility.</p>
<p>HTTP/2&rsquo;s multiplexing and binary framing make it significantly more efficient than HTTP/1.1, reducing latency and improving throughput. Connect RPC lets you harness these benefits while maintaining broad client compatibility for services that can&rsquo;t yet support HTTP/2.</p>
<p>Connect RPC can be used to build both internal and external APIs, powering frontends, mobile apps, CLIs, agents and more. See the list of <a href="https://github.com/connectrpc">supported languages</a>.</p>
<h2 id="core-features">Core Features</h2>
<p>Connect RPC provides a number of features out of the box, such as:</p>
<ul>
<li><a href="https://connectrpc.com/docs/go/interceptors">Interceptors</a> which make it easy to extend Connect RPC and are used to add authentication, logging, metrics, tracing and retries.</li>
<li><a href="https://connectrpc.com/docs/go/serialization-and-compression">Serialization &amp; compression</a>, with pluggable serializers and support for asymmetric compression, reducing the amount of data that needs to be transmitted or received.</li>
<li><a href="https://connectrpc.com/docs/go/errors">Error handling</a>, with a standard error format and support for custom error codes, allowing more granular error handling.</li>
<li><a href="https://connectrpc.com/docs/go/observability">Observability</a>, with built-in support for OpenTelemetry, enabling you to easily add tracing or metrics to your APIs.</li>
<li><a href="https://connectrpc.com/docs/go/streaming">Streaming</a>, which provides a very efficient way to push or pull data without polling.</li>
<li><a href="https://connectrpc.com/docs/protocol/#summary">Schemas</a>, which enable you to define and validate your API schemas, and generate code from them.</li>
<li><a href="https://connectrpc.com/docs/web/generating-code/#local-generation">Code generation</a> for <a href="https://go.dev">Go</a>, <a href="https://www.typescriptlang.org/">TypeScript</a>, <a href="https://kotlinlang.org/">Kotlin</a>, <a href="https://developer.apple.com/swift/">Swift</a> and <a href="https://www.java.com/en/">Java</a>.</li>
</ul>
<h2 id="ecosystem">Ecosystem</h2>
<p>In addition to these features, Connect RPC is built on top of the Buf ecosystem, which offers notable benefits:</p>
<ul>
<li><a href="https://buf.build/blog/connect-rpc-joins-cncf">Connect RPC joins CNCF</a>, entering the cloud-native ecosystem, which is great for the long term sustainability of the project.</li>
<li><a href="https://buf.build/product/bsr">Buf Schema Registry</a>, which is a great tool for managing, sharing and versioning your API schemas.</li>
<li><a href="https://buf.build/product/cli">Buf CLI</a>, a handy all in one tool for managing your APIs, generating code and linting.</li>
</ul>
<h2 id="recommended-interceptor-packages">Recommended Interceptor Packages</h2>
<p>Some handy Go packages that provide pre-built Connect RPC interceptors worth exploring or using as a starting point:</p>
<ul>
<li><a href="https://github.com/connectrpc/authn-go">authn-go</a> provides a prebuilt authentication middleware library for Go. It works with any authentication scheme (including HTTP basic authentication, cookies, bearer tokens, and mutual TLS).</li>
<li><a href="https://github.com/connectrpc/validate-go">validate-go</a> provides a Connect RPC interceptor that takes the tedium out of data validation. This package is powered by <a href="https://github.com/bufbuild/protovalidate-go">protovalidate</a>
and the <a href="https://github.com/google/cel-spec">Common Expression Language</a>.</li>
<li><a href="https://github.com/mdigger/rpclog">rpclog</a> provides a structured logging interceptor for Connect RPC with support for both unary and streaming RPCs.</li>
</ul>
<h2 id="summary">Summary</h2>
<ol>
<li>
<p>Connect RPC provides a paved and well maintained path to building gRPC compatible APIs, while maintaining compatibility for HTTP/1.1 clients. This is invaluable for product teams that need to support multiple client types without building custom compatibility layers.</p>
</li>
<li>
<p>Using a mature library like Connect RPC, you get to benefit from all the prebuilt integrations, and the added capabilities of the Buf ecosystem. This makes publishing and consuming APIs a breeze.</p>
</li>
<li>
<p>Protobuf schemas, high performance serialisation and compression ensure you get robust and efficient APIs.</p>
</li>
</ol>
<h2 id="conclusion">Conclusion</h2>
<p>Connect RPC makes it easy to build high-performance, robust APIs with gRPC compatibility, while avoiding the complexity of building and maintaining custom compatibility layers.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why OIDC?</title>
      <link>https://www.wolfe.id.au/2025/11/16/why-oidc/</link>
      <pubDate>Sun, 16 Nov 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/11/16/why-oidc/</guid>
      <description>&lt;p&gt;Over the last few years there has been a push away from using machine identity for continuous integration (CI) agents, or runners, and toward a more targeted, least-privilege approach to authentication and authorization. This is where &lt;a href=&#34;https://openid.net/developers/how-connect-works/&#34;&gt;OIDC (OpenID Connect)&lt;/a&gt; comes in: a method of authentication used to bridge between the CI provider and cloud services such as AWS, Azure, and Google Cloud.&lt;/p&gt;
&lt;p&gt;In this model the CI provider acts as an identity provider, issuing tokens to the CI runner/agent which include a set of claims identifying the owner, pipeline, workflow and job that is being executed. This is then used to authenticate with the cloud service, and access the resources that the pipeline, workflow and job require.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the last few years there has been a push away from using machine identity for continuous integration (CI) agents, or runners, and toward a more targeted, least-privilege approach to authentication and authorization. This is where <a href="https://openid.net/developers/how-connect-works/">OIDC (OpenID Connect)</a> comes in: a method of authentication used to bridge between the CI provider and cloud services such as AWS, Azure, and Google Cloud.</p>
<p>In this model the CI provider acts as an identity provider, issuing tokens to the CI runner/agent which include a set of claims identifying the owner, pipeline, workflow and job that is being executed. This is then used to authenticate with the cloud service, and access the resources that the pipeline, workflow and job require.</p>
<p>In simple terms, this is a form of trust delegation, where the CI provider is trusted by the cloud service to issue tokens on behalf of the owner, pipeline, workflow and job.</p>
<h2 id="how-oidc-works">How OIDC Works</h2>
<p>The OIDC trust delegation flow is as follows:</p>
<pre class="mermaid">sequenceDiagram
    participant CI as CI Provider&lt;br/&gt;(Identity Provider)
    participant Runner as CI Runner/Agent
    participant Cloud as Cloud Service&lt;br/&gt;(AWS/Azure/GCP)

    Note over CI,Cloud: OIDC Trust Delegation Flow

    CI-&gt;&gt;Runner: Issue OIDC token with claims&lt;br/&gt;(pipeline, workflow, job)
    Runner-&gt;&gt;Cloud: Request access with OIDC token
    Cloud-&gt;&gt;Cloud: Verify token signature&lt;br/&gt;and validate claims
    Cloud-&gt;&gt;Runner: Grant temporary credentials
    Runner-&gt;&gt;Cloud: Access resources with credentials

    Note over CI,Cloud: Trust established via OIDC configuration
</pre>
<p>There are a few things to note:</p>
<ul>
<li>When using OIDC, the runner doesn&rsquo;t need to be registered with the cloud service; it is granted access via the OIDC token.</li>
<li>The OIDC token is cryptographically signed by the CI provider, and the cloud service verifies the signature to ensure the token is valid.</li>
<li>In this model the three parties play distinct roles: the CI provider issues and signs tokens, the runner presents them, and the cloud service verifies them.</li>
</ul>
<h2 id="limiting-cloud-access-to-the-agentrunner">Limiting Cloud Access to the Agent/Runner</h2>
<p>To ensure the CI provider can&rsquo;t access the cloud service directly, you can add conditions so that only the runner/agent is allowed to access the cloud resources.</p>
<p>On top of this, cloud providers such as AWS have conditions which can <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-network-properties">restrict access to a specific AWS network resource, such as a VPC</a>. I recommend familiarizing yourself with the documentation for your cloud provider to understand how to lock down access to the runner/agent.</p>
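<p>As a concrete illustration, here is a sketch of an AWS IAM role trust policy that only accepts tokens issued by GitHub Actions for a specific repository and branch; the account ID, organisation, and repository names are placeholders.</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:example-org/example-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

<p>The <code>sub</code> condition is where the least-privilege scoping happens: only workflows on that branch of that repository can assume the role.</p>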
<h2 id="benefits-of-oidc">Benefits of OIDC</h2>
<p>The reasons this is useful are:</p>
<ul>
<li>It provides a more secure and flexible approach to authentication and authorization</li>
<li>It limits the scope of the token to the specific pipeline, workflow, and job</li>
<li>It is tied to the lifecycle of the pipeline, workflow, and job, which means the token is limited to the duration of that execution</li>
<li>It is more flexible than using machine identity for CI runners/agents as it allows for more granular control over the permissions granted to the runner/agent</li>
</ul>
<h2 id="ephemeral-runnersagents">Ephemeral Runners/Agents</h2>
<p>Ephemeral runners/agents are short-lived runners/agents that execute a single job or workflow and are created just before it starts. They provide a more secure and flexible approach to job execution, as there is no need to worry about these environments being tainted by previous jobs or workflows.</p>
<p>When paired with OIDC these environments provide an extra layer of security as they are destroyed after the job or workflow is complete, further reducing the risk of cross job or workflow access.</p>
<h2 id="summary">Summary</h2>
<p>In summary, OIDC provides a more secure and flexible approach to access management for CI projects, and it is particularly useful when paired with ephemeral runners/agents.</p>
<p>The biggest advantage of this approach is that it allows engineers to focus on the access required by the pipeline, workflow, and job, rather than having to manage machine identities and permissions for each runner/agent.</p>
<p>One of the interesting things about this approach is that you&rsquo;re not limited to using OIDC just with cloud providers; you can use it with your own services as well. By using OIDC libraries such as <a href="https://github.com/coreos/go-oidc">github.com/coreos/go-oidc</a>, you can implement APIs which can use the identity of CI pipelines, workflows, and jobs. An example of this is <a href="https://www.hashicorp.com/en/resources/using-oidc-with-hashicorp-vault-and-github-actions">Using OIDC With HashiCorp Vault and GitHub Actions</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://buildkite.com/docs/pipelines/security/oidc">OIDC for Buildkite</a></li>
<li><a href="https://docs.github.com/en/enterprise-cloud@latest/actions/concepts/security/openid-connect">OIDC for GitHub Actions</a></li>
<li><a href="https://docs.gitlab.com/integration/openid_connect_provider/">OIDC for GitLab</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Using a Monorepo to publish Lean Go Packages with Workspaces</title>
      <link>https://www.wolfe.id.au/2023/12/28/using-a-monorepo-to-publish-lean-go-packages-with-workspaces/</link>
      <pubDate>Thu, 28 Dec 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/12/28/using-a-monorepo-to-publish-lean-go-packages-with-workspaces/</guid>
      <description>&lt;p&gt;As a developer who works with Go in my day-to-day development, I constantly struggle with third party packages or tools which bring in a lot of dependencies. This is especially true when you&amp;rsquo;re trying to keep your project dependencies up to date, while &lt;a href=&#34;https://github.com/dependabot&#34;&gt;dependabot&lt;/a&gt;, and other security software, is screaming about vulnerabilities in dependencies of dependencies.&lt;/p&gt;
&lt;p&gt;This is especially a problem with three common types of packages I use:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Any HTTP adaptor package, which ships with integrations for multiple server packages, such as Gin, Echo, and others.&lt;/li&gt;
&lt;li&gt;Any package which uses docker to test with containers.&lt;/li&gt;
&lt;li&gt;Projects which include examples with their own dependencies.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To break this cycle in my own projects, and packages I publish privately in work projects, I have adopted the use of &lt;a href=&#34;https://go.dev/ref/mod#workspaces&#34;&gt;Go workspaces&lt;/a&gt;, which allows me to create a monorepo broken up into multiple packages, and then publish one or more of these packages.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As a developer who works with Go in my day-to-day development, I constantly struggle with third party packages or tools which bring in a lot of dependencies. This is especially true when you&rsquo;re trying to keep your project dependencies up to date, while <a href="https://github.com/dependabot">dependabot</a>, and other security software, is screaming about vulnerabilities in dependencies of dependencies.</p>
<p>This is especially a problem with three common types of packages I use:</p>
<ol>
<li>Any HTTP adaptor package, which ships with integrations for multiple server packages, such as Gin, Echo, and others.</li>
<li>Any package which uses docker to test with containers.</li>
<li>Projects which include examples with their own dependencies.</li>
</ol>
<p>To break this cycle in my own projects, and packages I publish privately in work projects, I have adopted the use of <a href="https://go.dev/ref/mod#workspaces">Go workspaces</a>, which allows me to create a monorepo broken up into multiple packages, and then publish one or more of these packages.</p>
<p>So to understand how this helps, let&rsquo;s look at an example. I have a project called <a href="https://github.com/wolfeidau/s3iofs">s3iofs</a> which provides an S3-based <a href="https://pkg.go.dev/io/fs">io/fs</a> adaptor, and within this project I have integration tests which use Docker and a <a href="https://min.io/">minio</a> server to test it.</p>
<p>Before I started using workspaces, if you added this package to your project you would have the docker client added to your dependencies, which in turn would add its dependencies, resulting in a lot of bloat in your project.</p>
<p>This is best illustrated by the following dependency count, which is from my <code>github.com/wolfeidau/s3iofs</code> package.</p>
<pre tabindex="0"><code>cat go.sum | wc -l
65
</code></pre><p>By comparison my <code>github.com/wolfeidau/s3iofs/integration</code> package has the following dependency count.</p>
<pre tabindex="0"><code>cat go.sum | wc -l
185
</code></pre><p>This is a rather simplistic comparison, but you can see that the integration tests have a lot more dependencies.</p>
<p>Because I have isolated the docker based integration tests in their own package, within this workspace, I can develop away happily, not needing to micromanage these modules, while you as the consumer of my package get a lean secure package.</p>
<h2 id="how-to-use-workspaces">How to use workspaces</h2>
<p>To get started with workspaces I recommend the official tutorial, <a href="https://go.dev/doc/tutorial/workspaces">Getting started with multi-module workspaces</a>.</p>
<p>Once you have read through the getting started guide, you can structure your own project with the following commands.</p>
<ul>
<li>First, initialise the Go project; this is done from an empty folder with the same name as the project, and will create a go.mod file.</li>
</ul>
<pre tabindex="0"><code>mkdir s3backend
cd s3backend
go mod init github.com/wolfeidau/s3backend
</code></pre><ul>
<li>Once we have written some code and added some dependencies, we can set up some integration tests. To do this we initialise another Go module in a subfolder called <code>integration</code>. In the case of <code>s3iofs</code> this folder contains only test files.</li>
</ul>
<pre tabindex="0"><code>mkdir integration
cd integration
go mod init github.com/wolfeidau/s3backend/integration
</code></pre><ul>
<li>Now we can initialise our workspace and add the two modules: our library in the root, and the integration tests.</li>
</ul>
<pre tabindex="0"><code>go work init
go work use .
go work use ./integration
</code></pre><ul>
<li>Now we can run the tests in the integration folder; note the <code>-coverpkg</code> flag used below.</li>
</ul>
<pre tabindex="0"><code>cd integration
go test -covermode=atomic -coverpkg=github.com/wolfeidau/s3iofs -v ./...
</code></pre><ul>
<li>This will produce the following test results; note how I am able to report test coverage across module boundaries using the <code>-coverpkg</code> flag, which was introduced in Go 1.20 and is explained in <a href="https://go.dev/doc/build-cover">Coverage profiling support for integration tests</a>.</li>
</ul>
<pre tabindex="0"><code>PASS
coverage: 70.2% of statements in github.com/wolfeidau/s3iofs
2023/12/28 12:58:43 code 0
ok  	github.com/wolfeidau/s3iofs/integration	1.225s	coverage: 70.2% of statements in github.com/wolfeidau/s3iofs
</code></pre><p>If you need this <code>-coverpkg</code> option to work in vscode, you will need to add the following to your <code>.vscode/settings.json</code> file in your project.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;go.testFlags&#34;</span><span class="p">:</span> <span class="p">[</span>
</span></span><span class="line"><span class="cl">        <span class="s2">&#34;-v&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="s2">&#34;-coverpkg=github.com/wolfeidau/s3iofs&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">]</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>This is just one use case for using workspaces in a monorepo, but it is a very useful tool for managing dependencies you use in your project, and how you keep what you provide to others as lean and secure as possible.</p>
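<p>For reference, after running the <code>go work</code> commands above, the workspace is described by a <code>go.work</code> file at the repository root, which looks something like this (the Go version line will match your toolchain):</p>

```
go 1.21

use (
	.
	./integration
)
```

<p>This file lists the modules in the workspace and should normally be kept out of version control for libraries, as it only affects local development.</p>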
<p>I recommend you clone the <a href="https://github.com/wolfeidau/s3iofs">s3iofs</a> and dig into how it works locally, open it in your editor of choice and run the tests, then try it out in your own project.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Getting started with AI for developers</title>
      <link>https://www.wolfe.id.au/2023/12/16/getting-started-with-ai-for-developers/</link>
      <pubDate>Sat, 16 Dec 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/12/16/getting-started-with-ai-for-developers/</guid>
      <description>&lt;p&gt;As a software developer, I have seen a lot of changes over the years; however, few have been as drastic as the rise of artificial intelligence. There is a growing list of tools and services using this technology to help developers with day-to-day tasks and speed up their work; however, few of these tools help them understand how the technology works and what it can do. So I wanted to share some of my own tips on how to get started with AI.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As a software developer, I have seen a lot of changes over the years; however, few have been as drastic as the rise of artificial intelligence. There is a growing list of tools and services using this technology to help developers with day-to-day tasks and speed up their work; however, few of these tools help them understand how the technology works and what it can do. So I wanted to share some of my own tips on how to get started with AI.</p>
<p>The aim of this exercise is to help develop some intuition of how AI works, and how it can be used to help in your day-to-day tasks, while hopefully discovering ways to use it in future applications you build.</p>
<h2 id="getting-started">Getting Started</h2>
<p>As the common saying, which originated as a Chinese proverb, goes:</p>
<blockquote>
<p>A journey of a thousand miles begins with a single step.</p>
</blockquote>
<p>To kick off your understanding of AI I recommend you select a coding assistant and start using it on your personal or side projects; this will show you where it succeeds and where it sometimes fails. Building this knowledge up will help you develop an understanding of its strengths and weaknesses as a user.</p>
<p>I personally recommend getting started with <a href="https://about.sourcegraph.com/cody">Cody</a> as it is a great tool, and is free for personal use, while also being open source itself. The developers of Cody are very open and helpful, and have a great community of users, while also sharing their own experiences while building the tool.</p>
<p>Cody is more than just a code completion tool, you can ask it questions and get it to summarise and document your code, and even generate test cases. Make sure you explore all the options, again to build up more knowledge of how these AI tools work.</p>
<p>And most importantly, be curious, and explore every corner of the tool.</p>
<h2 id="diving-into-llms">Diving Into LLMs</h2>
<p>Next, I recommend you start experimenting with some of the open source large language models (LLMs) using tools such as <a href="https://ollama.ai/">ollama</a>, which allows you to download, run and experiment with the software. To get started with this tool, you can follow the quick start in the <code>README.md</code> hosted at <a href="https://github.com/jmorganca/ollama">https://github.com/jmorganca/ollama</a>. There is also a great intro by Sam Witteveen, <a href="https://www.youtube.com/watch?v=Ox8hhpgrUi0&amp;t=2s">Ollama - Local Models on your machine</a>, which I highly recommend.</p>
<h2 id="what-is-a-large-language-model">What is a large language model?</h2>
<p>Here is a quote from <a href="https://en.wikipedia.org/wiki/Large_language_model">wikipedia on what a large language model</a> is:</p>
<blockquote>
<p>A large language model (LLM) is a large scale language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by using massive amounts of data to learn billions of parameters during training and consuming large computational resources during their training and operation. LLMs are artificial neural networks (mainly <a href="https://en.wikipedia.org/wiki/Transformer_%28machine_learning_model%29">transformers</a>) and are (pre)trained using self-supervised learning and semi-supervised learning.</p>
</blockquote>
<h2 id="why-open-llms">Why Open LLMs?</h2>
<p>I prefer to learn from the open LLMs for the following reasons:</p>
<ol>
<li>They have a great community of developers and users, who share information about the latest developments.</li>
<li>You get a broader range of models, and can try them out and see what they do.</li>
<li>You can run them locally with your data, and see what they do without some of the privacy concerns of cloud based services.</li>
<li>You have the potential to fine tune them to your data, and improve the performance.</li>
</ol>
<p>To keep up with the latest developments I use the <a href="https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard">Hugging Face Open LLM Leaderboard</a>, as the Hugging Face team have been doing a lot of work on large language models and have a great community of users. When new models are posted, experiences and fine-tuned versions are also shared via the <a href="https://huggingface.co/blog">blog</a>, which is a great resource. Notable models are normally added to ollama after a day or so, so you can try them out and see what they do.</p>
<p>There are a number of different types of LLMs, each with their own strengths and weaknesses. I personally like to experiment with the chat models, as they are very simple to use and easy to interface with via ollama. An example from the Hugging Face site is <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">https://huggingface.co/mistralai/Mistral-7B-v0.1</a>, a model trained by the <a href="https://mistral.ai/">Mistral AI</a> team.</p>
<p>To get started with this model, follow the instructions at <a href="https://ollama.ai/library/mistral">https://ollama.ai/library/mistral</a> to download and run the model locally.</p>
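<p>Once the model is running, you can also talk to it programmatically. The sketch below is a minimal example, assuming a local ollama server on its default port (11434) and using its <code>/api/generate</code> endpoint; the <code>ask</code> helper name and the model name are just illustrative.</p>
<pre><code class="language-python">import json
import urllib.request

# Default endpoint for a locally running ollama server (assumes a standard install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model, prompt):
    """Build the JSON body for a single, non-streaming generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, prompt):
    """Send a prompt to the local ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
</code></pre>
<p>Calling <code>ask("mistral", "...")</code> with one of your questions should return the model&rsquo;s answer as a plain string.</p>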
<h2 id="pick-a-scenario-to-test">Pick a Scenario To Test</h2>
<p>My scenario relates to my current role, and covers questions which my team encounters on a day-to-day basis. As a team we provide advice to customers about how to improve the operational readiness and security posture of internally developed applications. This is a common scenario for many companies, where an application is developed as a proof of concept and is then deployed to a production environment without the supporting processes in place.</p>
<p>This approach is helpful because:</p>
<ol>
<li>This is a scenario I can relate to, and can use my existing knowledge to review the results.</li>
<li>This is a scenario which is not too complex, and can be used to demonstrate the concepts.</li>
<li>This is a scenario which will provide me value while I am learning how to use the tools.</li>
</ol>
<h2 id="building-a-list-of-questions">Building a list of questions</h2>
<p>Once you have a scenario, you can draft a list of questions to start testing models with. This will help you understand how the models work and how they can be used to support a team or business unit, while also teaching you how to use them.</p>
<p>The questions I am currently using mainly focus on DevOps and SRE processes, paired with a dash of <a href="https://aws.amazon.com/">AWS</a> security and terraform questions.</p>
<h3 id="i-need-to-create-a-secure-environment-in-and-aws-account-where-should-i-start">I need to create a secure environment in an AWS Account, where should I start?</h3>
<p>This question is really common for developers starting out in AWS. It is quite broad, and I am mostly expecting a high level overview of how to create a secure environment and how to get started.</p>
<h3 id="how-would-i-create-an-encrypted-secure-s3-bucket-using-terraform">How would I create an encrypted secure s3 bucket using terraform?</h3>
<p>This question is a bit more specific, focusing on a single AWS service while also adding a few specific requirements. Models like Mistral will provide a step-by-step guide on how to achieve this, while others will provide the terraform code.</p>
<h3 id="i-need-to-create-an-application-risk-management-program-where-should-i-start">I need to create an Application Risk Management Program, where should I start?</h3>
<p>This question is quite common if you&rsquo;re working in a company which doesn&rsquo;t have a long history of internal software development, or in a team that is trying to ensure it covers the risks of its applications.</p>
<h3 id="what-is-a-good-sre-incident-process-for-a-business-application">What is a good SRE incident process for a business application?</h3>
<p>This question is also quite broad, but includes Site Reliability Engineering (SRE) as a keyword, so I am expecting an answer which aligns with the principles of this movement.</p>
<h3 id="what-is-a-good-checklist-for-a-serverless-developer-who-wants-to-improve-the-monitoring-of-their-applications">What is a good checklist for a serverless developer who wants to improve the monitoring of their applications?</h3>
<p>This is a common question from people who are just getting started with serverless and are interested in improving, or have been asked to improve, the monitoring of their applications.</p>
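<p>Once you have a question list like the one above, a tiny harness makes it easy to run every question against several models and compare the answers side by side. This is a sketch: <code>ask_fn</code> stands in for whatever client you use to talk to a model, and a stub is shown here so nothing needs to be running.</p>
<pre><code class="language-python">def evaluate(models, questions, ask_fn):
    """Ask every question of every model; ask_fn(model, question) returns the answer text."""
    results = {}
    for model in models:
        for question in questions:
            results[(model, question)] = ask_fn(model, question)
    return results

# Stub client for illustration; swap in a real client to collect live answers.
def stub(model, question):
    return "[" + model + "] would answer: " + question

answers = evaluate(["mistral", "llama2"],
                   ["What is a good SRE incident process for a business application?"],
                   stub)
</code></pre>
<p>Skimming the collected answers next to each other makes the differences between models much easier to spot than re-running questions one at a time.</p>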
<h2 id="whats-next">What&rsquo;s Next?</h2>
<p>So now that you have a scenario and a few questions I recommend you do the following:</p>
<ol>
<li>Try a couple of other models; <a href="https://ollama.ai/library/llama2">llama2</a> and <a href="https://ollama.ai/library/orca2">orca2</a> are a good starting point.</li>
<li>Learn a bit about prompting by following <a href="https://replicate.com/blog/how-to-prompt-llama">A guide to prompting Llama 2</a> from the replicate blog.</li>
<li>Apply the prompts to your ollama model using a <a href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md">modelfile</a>, which is similar to a <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a>.</li>
<li>Try out an uncensored model, something like <a href="https://ollama.ai/library/llama2-uncensored">llama2-uncensored</a>, and run through your questions, then ask about breaking into cars or killing processes, which can be problematic questions for some censored models. It is good to understand what censoring a model does, as it helps you understand the risks of using a model.</li>
<li>Start reading more about <a href="https://github.com/premAI-io/state-of-open-source-ai">The State of Open Source AI (2023 Edition)</a>.</li>
</ol>
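<p>As a sketch of step 3, a modelfile lets you bake a system prompt and parameters into a named model of your own; the base model, temperature and system prompt below are purely illustrative choices.</p>
<pre><code># Build with: ollama create devops-helper -f Modelfile
FROM llama2
# Lower temperature for more focused, less creative answers.
PARAMETER temperature 0.5
SYSTEM You are a pragmatic senior DevOps engineer. Give concise, actionable answers.
</code></pre>
<p>Running <code>ollama run devops-helper</code> then gives you a model pre-loaded with that persona for your question list.</p>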
<h2 id="further-research">Further Research</h2>
<p>Now that you are dabbling with LLMs and AI, I recommend you try these models for the odd question in your day-to-day work; the local ones running in ollama are relatively safe, and they can save you a lot of work.</p>
<p>Also try similar questions with services such as <a href="https://chat.openai.com/">https://chat.openai.com/</a>; hosted services are a powerful tool for ad hoc testing and learning. Just be aware of data privacy and security when using these services.</p>
<p>Once you have some experience, you will hopefully even incorporate a model into work projects such as data cleansing, summarisation, or processing of user feedback to help you improve your applications. For this you can use services such as <a href="https://aws.amazon.com/bedrock/">AWS Bedrock</a> on AWS, or <a href="https://cloud.google.com/generative-ai-studio">Generative AI Studio</a> on Google Cloud, while following the same methodology to evaluate and select a model for your use case.</p>
<p>If you&rsquo;re intrigued and want to go even deeper than these APIs, I recommend you dive into some of the amazing resources on the web for learning how AI and LLMs work, and possibly even develop or fine tune your own models.</p>
<ul>
<li><a href="https://www.fast.ai/">fast.ai</a>, which provides some great self-paced online learning on AI.</li>
<li><a href="https://www.youtube.com/watch?v=zjkBMFhNj_g">A busy person&rsquo;s intro to LLMs</a>, a great lecture on LLMs.</li>
<li><a href="http://introtodeeplearning.com/">MIT Introduction to Deep Learning</a> for those who want to dive deeper and prefer more of a structured course.</li>
<li><a href="https://www.youtube.com/watch?v=jkrNMKz9pWU">A Hackers&rsquo; Guide to Language Models</a>, another great talk by Jeremy Howard of fast.ai.</li>
</ul>
]]></content:encoded>
    </item>
    
  </channel>
</rss>
