<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Mark Wolfe&#39;s Blog</title>
    <link>https://www.wolfe.id.au/</link>
    <description>Recent content on Mark Wolfe&#39;s Blog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sun, 07 Dec 2025 08:55:22 +1000</lastBuildDate><atom:link href="https://www.wolfe.id.au/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>How I Work with AI Coding Agents</title>
      <link>https://www.wolfe.id.au/2025/12/07/how-i-work-with-ai-coding-agents/</link>
      <pubDate>Sun, 07 Dec 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/12/07/how-i-work-with-ai-coding-agents/</guid>
<description>&lt;p&gt;For anyone who has been following AI and software development, things are changing rapidly, and this includes how we build software.&lt;/p&gt;
&lt;p&gt;Over the last few months, I have found myself going from working alone to working with AI agents such as &lt;a href=&#34;https://claude.ai/&#34;&gt;Anthropic&amp;rsquo;s Claude&lt;/a&gt;, &lt;a href=&#34;https://openai.com/codex/&#34;&gt;OpenAI&amp;rsquo;s Codex&lt;/a&gt;, &lt;a href=&#34;https://ampcode.com/&#34;&gt;Amp&lt;/a&gt; or &lt;a href=&#34;https://geminicli.com/&#34;&gt;Google&amp;rsquo;s Gemini CLI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This change has been both exciting and challenging. With the help of these AI agents, I have been able to delegate tasks and focus on the most important things.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>For anyone who has been following AI and software development, things are changing rapidly, and this includes how we build software.</p>
<p>Over the last few months, I have found myself going from working alone to working with AI agents such as <a href="https://claude.ai/">Anthropic&rsquo;s Claude</a>, <a href="https://openai.com/codex/">OpenAI&rsquo;s Codex</a>, <a href="https://ampcode.com/">Amp</a> or <a href="https://geminicli.com/">Google&rsquo;s Gemini CLI</a>.</p>
<p>This change has been both exciting and challenging. With a shift in mindset, I have been able to delegate tasks to these agents more effectively, focus on the most important work, and get more predictable outcomes.</p>
<h2 id="the-death-of-the-chat-box">The death of the Chat Box</h2>
<p>Over the last few months, I have found myself moving away from transactional interactions with AI agents via a chat box, to a more collaborative approach. Instead of asking the AI agent to fix an issue and then reviewing the results, I am now working in a more iterative way. This has led me to follow a more <a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html">specification driven development process</a> which is a great way to ensure more predictable and reliable results.</p>
<p>This process looks like:</p>
<ol>
<li>Provide details of the problem, feature, or bug, then work with the agent to put together a plan.</li>
<li>Review the plan, remove any unnecessary steps and focus on the most important ones. I then ask the agent to export the plan to a specification in a <a href="https://daringfireball.net/projects/markdown/">markdown</a> file in the codebase.</li>
<li>I then clear the context (<code>/clear</code> in Claude Code) and get the agent to review the specification and provide feedback. This typically highlights a few areas that need to be addressed.</li>
<li>If the specification looks good, and I am clear on the outcomes, I instruct the agent to start work on it; implementation typically proceeds in one or more phases.</li>
<li>I then do some testing, review the results and provide feedback.</li>
<li>I clear the context and get the agent to review the specification and the outcomes, then we update it with the results.</li>
<li>Finally, I clear the context and get the agent to review the outcomes and provide feedback; for code this is done using a code review skill or sub-agent. Once this process is complete I can commit the changes to the codebase.</li>
</ol>
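<p>To make this concrete, here is a minimal sketch of what such a specification file might contain; the headings, feature, and steps are illustrative, not a fixed format:</p>

```markdown
# Feature: CSV export for reports

## Goal
Allow users to export any report as a CSV file.

## Out of scope
- PDF export
- Scheduled exports

## Plan
1. Add an export endpoint to the reports API.
2. Stream rows to the client as CSV.
3. Add tests covering empty reports and large reports.

## Outcomes
(filled in after implementation and review)
```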
<p>This process is useful for most tasks, such as building new features or refactoring existing ones, but I find a scaled-back version of it valuable even for small tasks, to keep the agent on track.</p>
<p><strong>NOTE:</strong> During long conversations with an AI agent it is important to keep the context clean, otherwise it fills up with irrelevant discussion, which hurts performance and distracts the agent.</p>
<h2 id="documentation-is-king">Documentation is King</h2>
<p>Documentation is the backbone of any software project and with the rise of AI, it is becoming even more important. The ability to quickly and easily create a specification, then iterate on it, and use it to drive the development process with an AI agent is key to ensuring work stays on track.</p>
<p>Why is maintaining a specification important?</p>
<ul>
<li>It helps you establish a clear goal and plan for what the AI agent is going to do.</li>
<li>It provides a reference point for other developers and stakeholders.</li>
<li>Once changes are complete the specification can be used to update the documentation used by customers.</li>
</ul>
<p>The idea of documenting software isn&rsquo;t new; it has been practiced since people started writing software. I am personally enjoying this renaissance of documentation, as everyone wins, from the developers writing the code to the customers using it.</p>
<h2 id="conclusion">Conclusion</h2>
<p>This is a new way of working. It won&rsquo;t be perfect, especially while you&rsquo;re figuring out how to work with the AI agent. But it is an opportunity to improve your productivity by embracing this new paradigm.</p>
<p>Key takeaways:</p>
<ul>
<li>Embrace specification driven development, as this is the foundation of good software development.</li>
<li>Ensure specifications are reviewed and questioned before the AI agent starts work; this avoids wasting time on rework or pointless changes.</li>
<li>Be collaborative with your AI agent, ask questions, sweat the details and be patient as you learn to build up your intuition and confidence with these tools.</li>
</ul>
<p>One big thing to understand is that AI agents are especially valuable when tackling tasks you aren&rsquo;t familiar with. Tell the agent up front that your goal is both to solve the problem and to learn how the solution works; this helps the agent understand your goals and deliver the best possible outcomes.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://gist.github.com/wolfeidau/0be9b3b56ebca452375404baddf33777">A recent side project specification written with Claude Code</a></li>
<li><a href="https://www.anthropic.com/engineering/claude-code-best-practices">Claude Code: Best practices for agentic coding</a></li>
<li><a href="https://brooker.co.za/blog/2025/12/16/natural-language.html">Marc Brooker: On the success of ‘natural language programming’</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why Connect RPC is a great choice for building APIs</title>
      <link>https://www.wolfe.id.au/2025/12/02/why-connect-rpc-is-a-great-choice-for-building-apis/</link>
      <pubDate>Tue, 02 Dec 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/12/02/why-connect-rpc-is-a-great-choice-for-building-apis/</guid>
<description>&lt;p&gt;&lt;a href=&#34;https://connectrpc.com/&#34;&gt;Connect RPC&lt;/a&gt; is a suite of libraries that enable you to build HTTP-based APIs which are gRPC compatible. It provides a bridge between &lt;a href=&#34;https://grpc.io/&#34;&gt;gRPC&lt;/a&gt; and HTTP/1.1, letting you leverage HTTP/2&amp;rsquo;s multiplexing and performance benefits while still supporting HTTP/1.1 clients. This makes it a great solution for teams looking to get the performance benefits of gRPC, while maintaining broad client compatibility.&lt;/p&gt;
&lt;p&gt;HTTP/2&amp;rsquo;s multiplexing and binary framing make it significantly more efficient than HTTP/1.1, reducing latency and improving throughput. Connect RPC lets you harness these benefits while maintaining broad client compatibility for services that can&amp;rsquo;t yet support HTTP/2.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><a href="https://connectrpc.com/">Connect RPC</a> is a suite of libraries that enable you to build HTTP-based APIs which are gRPC compatible. It provides a bridge between <a href="https://grpc.io/">gRPC</a> and HTTP/1.1, letting you leverage HTTP/2&rsquo;s multiplexing and performance benefits while still supporting HTTP/1.1 clients. This makes it a great solution for teams looking to get the performance benefits of gRPC, while maintaining broad client compatibility.</p>
<p>HTTP/2&rsquo;s multiplexing and binary framing make it significantly more efficient than HTTP/1.1, reducing latency and improving throughput. Connect RPC lets you harness these benefits while maintaining broad client compatibility for services that can&rsquo;t yet support HTTP/2.</p>
<p>Connect RPC can be used to build both internal and external APIs, powering frontends, mobile apps, CLIs, agents and more. See the list of <a href="https://github.com/connectrpc">supported languages</a>.</p>
<h2 id="core-features">Core Features</h2>
<p>Connect RPC provides a number of features out of the box, such as:</p>
<ul>
<li><a href="https://connectrpc.com/docs/go/interceptors">Interceptors</a> which make it easy to extend Connect RPC and are used to add authentication, logging, metrics, tracing and retries.</li>
<li><a href="https://connectrpc.com/docs/go/serialization-and-compression">Serialization &amp; compression</a>, with pluggable serializers and support for asymmetric compression, reducing the amount of data that needs to be transmitted or received.</li>
<li><a href="https://connectrpc.com/docs/go/errors">Error handling</a>, with a standard error format and support for custom error codes, allowing more granular error handling.</li>
<li><a href="https://connectrpc.com/docs/go/observability">Observability</a>, with built-in support for OpenTelemetry, enabling you to easily add tracing or metrics to your APIs.</li>
<li><a href="https://connectrpc.com/docs/go/streaming">Streaming</a>, which provides a very efficient way to push or pull data without polling.</li>
<li><a href="https://connectrpc.com/docs/protocol/#summary">Schemas</a>, which enable you to define and validate your API schemas, and generate code from them.</li>
<li><a href="https://connectrpc.com/docs/web/generating-code/#local-generation">Code generation</a> for <a href="https://go.dev">Go</a>, <a href="https://www.typescriptlang.org/">TypeScript</a>, <a href="https://kotlinlang.org/">Kotlin</a>, <a href="https://developer.apple.com/swift/">Swift</a> and <a href="https://www.java.com/en/">Java</a>.</li>
</ul>
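<p>As a sketch of the schema-first workflow these features build on, a service is defined once in Protocol Buffers, and clients and handlers are generated from it for each supported language; the package and service names below are illustrative:</p>

```protobuf
syntax = "proto3";

package greet.v1;

// A minimal service definition; Connect-compatible handlers and
// clients can be generated from this schema.
service GreetService {
  rpc Greet(GreetRequest) returns (GreetResponse) {}
}

message GreetRequest {
  string name = 1;
}

message GreetResponse {
  string greeting = 1;
}
```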
<h2 id="ecosystem">Ecosystem</h2>
<p>In addition to these features, Connect RPC is built on top of the Buf ecosystem, which offers notable benefits:</p>
<ul>
<li><a href="https://buf.build/blog/connect-rpc-joins-cncf">Connect RPC joins CNCF</a>, entering the cloud-native ecosystem, which is great for the long term sustainability of the project.</li>
<li><a href="https://buf.build/product/bsr">Buf Schema Registry</a>, which is a great tool for managing, sharing and versioning your API schemas.</li>
<li><a href="https://buf.build/product/cli">Buf CLI</a>, a handy all in one tool for managing your APIs, generating code and linting.</li>
</ul>
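<p>With the Buf CLI, code generation is driven by a small config file. As an illustrative sketch (the remote plugin names below are the commonly used ones, check the Buf documentation for your setup), a <code>buf.gen.yaml</code> along these lines generates Go message types and Connect handlers when you run <code>buf generate</code>:</p>

```yaml
# buf.gen.yaml (v2 format) - illustrative example
version: v2
plugins:
  # Generate Go protobuf message types
  - remote: buf.build/protocolbuffers/go
    out: gen
    opt: paths=source_relative
  # Generate Connect RPC handlers and clients for Go
  - remote: buf.build/connectrpc/go
    out: gen
    opt: paths=source_relative
```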
<h2 id="recommended-interceptor-packages">Recommended Interceptor Packages</h2>
<p>Some handy Go packages that provide pre-built Connect RPC interceptors worth exploring or using as a starting point:</p>
<ul>
<li><a href="https://github.com/connectrpc/authn-go">authn-go</a> provides a prebuilt authentication middleware library for Go. It works with any authentication scheme (including HTTP basic authentication, cookies, bearer tokens, and mutual TLS).</li>
<li><a href="https://github.com/connectrpc/validate-go">validate-go</a> provides a Connect RPC interceptor that takes the tedium out of data validation. This package is powered by <a href="https://github.com/bufbuild/protovalidate-go">protovalidate</a>
and the <a href="https://github.com/google/cel-spec">Common Expression Language</a>.</li>
<li><a href="https://github.com/mdigger/rpclog">rpclog</a> provides a structured logging interceptor for Connect RPC with support for both unary and streaming RPCs.</li>
</ul>
<h2 id="summary">Summary</h2>
<ol>
<li>
<p>Connect RPC provides a paved and well maintained path to building gRPC compatible APIs, while maintaining compatibility for HTTP/1.1 clients. This is invaluable for product teams that need to support multiple client types without building custom compatibility layers.</p>
</li>
<li>
<p>Using a mature library like Connect RPC, you get to benefit from all the prebuilt integrations, and the added capabilities of the Buf ecosystem. This makes publishing and consuming APIs a breeze.</p>
</li>
<li>
<p>Protobuf schemas, high performance serialisation and compression ensure you get robust and efficient APIs.</p>
</li>
</ol>
<h2 id="conclusion">Conclusion</h2>
<p>Connect RPC makes it easy to build high-performance, robust APIs with gRPC compatibility, while avoiding the complexity of building and maintaining custom compatibility layers.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why OIDC?</title>
      <link>https://www.wolfe.id.au/2025/11/16/why-oidc/</link>
      <pubDate>Sun, 16 Nov 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/11/16/why-oidc/</guid>
<description>&lt;p&gt;Over the last few years there has been a push away from using machine identity for continuous integration (CI) agents, or runners, and toward a more targeted, least-privilege approach to authentication and authorization. This is where &lt;a href=&#34;https://openid.net/developers/how-connect-works/&#34;&gt;OIDC (OpenID Connect)&lt;/a&gt; comes in, which is a method of authentication used to bridge between the CI provider and cloud services such as AWS, Azure, and Google Cloud.&lt;/p&gt;
&lt;p&gt;In this model the CI provider acts as an identity provider, issuing tokens to the CI runner/agent which include a set of claims identifying the owner, pipeline, workflow and job that is being executed. This is then used to authenticate with the cloud service, and access the resources that the pipeline, workflow and job require.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Over the last few years there has been a push away from using machine identity for continuous integration (CI) agents, or runners, and toward a more targeted, least-privilege approach to authentication and authorization. This is where <a href="https://openid.net/developers/how-connect-works/">OIDC (OpenID Connect)</a> comes in, which is a method of authentication used to bridge between the CI provider and cloud services such as AWS, Azure, and Google Cloud.</p>
<p>In this model the CI provider acts as an identity provider, issuing tokens to the CI runner/agent which include a set of claims identifying the owner, pipeline, workflow and job that is being executed. This is then used to authenticate with the cloud service, and access the resources that the pipeline, workflow and job require.</p>
<p>In simple terms, this is a form of trust delegation, where the CI provider is trusted by the cloud service to issue tokens on behalf of the owner, pipeline, workflow and job.</p>
<h2 id="how-oidc-works">How OIDC Works</h2>
<p>The OIDC trust delegation flow is as follows:</p>
<pre class="mermaid">sequenceDiagram
    participant CI as CI Provider&lt;br/&gt;(Identity Provider)
    participant Runner as CI Runner/Agent
    participant Cloud as Cloud Service&lt;br/&gt;(AWS/Azure/GCP)

    Note over CI,Cloud: OIDC Trust Delegation Flow

    CI-&gt;&gt;Runner: Issue OIDC token with claims&lt;br/&gt;(pipeline, workflow, job)
    Runner-&gt;&gt;Cloud: Request access with OIDC token
    Cloud-&gt;&gt;Cloud: Verify token signature&lt;br/&gt;and validate claims
    Cloud-&gt;&gt;Runner: Grant temporary credentials
    Runner-&gt;&gt;Cloud: Access resources with credentials

    Note over CI,Cloud: Trust established via OIDC configuration
</pre>
<p>There are a few things to note:</p>
<ul>
<li>When using OIDC, the runner doesn&rsquo;t need to be registered with the cloud service; it is granted access via the OIDC token.</li>
<li>The OIDC token is cryptographically signed by the CI provider, and the cloud service verifies the signature to ensure the token is valid.</li>
<li>Trust is established ahead of time: the cloud service is configured to trust tokens issued by the CI provider, and the runner simply presents the token it has been issued.</li>
</ul>
<h2 id="limiting-cloud-access-to-the-agentrunner">Limiting Cloud Access to the Agent/Runner</h2>
<p>To ensure the CI provider can&rsquo;t access the cloud service directly, you can add conditions to the trust configuration so that only the expected runner/agent is allowed to access the cloud resources.</p>
<p>On top of this, cloud providers such as AWS have condition keys which can <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-network-properties">restrict access to a specific AWS network resource, such as a VPC</a>. I recommend familiarizing yourself with the documentation for your cloud provider to understand how to lock down access to the runner/agent.</p>
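<p>For example, an AWS IAM role trust policy for GitHub Actions might look like the following; the account ID and repository are placeholders, and the conditions pin the token to a single audience, repository, and branch:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:acme/widgets:ref:refs/heads/main"
        }
      }
    }
  ]
}
```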
<h2 id="benefits-of-oidc">Benefits of OIDC</h2>
<p>The reasons this is useful are:</p>
<ul>
<li>It provides a more secure and flexible approach to authentication and authorization</li>
<li>It limits the scope of the token to the specific pipeline, workflow, and job</li>
<li>It is tied to the lifecycle of the pipeline, workflow, and job, which means the token is limited to the duration of that execution</li>
<li>It is more flexible than using machine identity for CI runners/agents as it allows for more granular control over the permissions granted to the runner/agent</li>
</ul>
<h2 id="ephemeral-runnersagents">Ephemeral Runners/Agents</h2>
<p>Ephemeral runners/agents are short-lived runners/agents, created for a single job or workflow just before it starts. They provide a more secure and flexible approach to job execution, as there is no need to worry about these environments being tainted by previous jobs or workflows.</p>
<p>When paired with OIDC these environments provide an extra layer of security as they are destroyed after the job or workflow is complete, further reducing the risk of cross job or workflow access.</p>
<h2 id="summary">Summary</h2>
<p>In summary, OIDC provides a more secure and flexible approach to access management for CI projects, and it is particularly useful when paired with ephemeral runners/agents.</p>
<p>The biggest advantage of this approach is that it allows engineers to focus on the access required by the pipeline, workflow, and job, rather than having to manage machine identities and permissions for each runner/agent.</p>
<p>One of the interesting things about this approach is that you&rsquo;re not limited to using OIDC just with cloud providers; you can use it with your own services as well. By using OIDC libraries such as <a href="https://github.com/coreos/go-oidc">github.com/coreos/go-oidc</a>, you can implement APIs which can use the identity of CI pipelines, workflows, and jobs. An example of this is <a href="https://www.hashicorp.com/en/resources/using-oidc-with-hashicorp-vault-and-github-actions">Using OIDC With HashiCorp Vault and GitHub Actions</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://buildkite.com/docs/pipelines/security/oidc">OIDC for Buildkite</a></li>
<li><a href="https://docs.github.com/en/enterprise-cloud@latest/actions/concepts/security/openid-connect">OIDC for GitHub Actions</a></li>
<li><a href="https://docs.gitlab.com/integration/openid_connect_provider/">OIDC for GitLab</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>What you should consider when storing datasets in s3</title>
      <link>https://www.wolfe.id.au/2024/04/11/what-you-should-consider-when-storing-datasets-in-s3/</link>
      <pubDate>Thu, 11 Apr 2024 22:07:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2024/04/11/what-you-should-consider-when-storing-datasets-in-s3/</guid>
      <description>&lt;p&gt;As an &lt;a href=&#34;https://aws.amazon.com&#34;&gt;Amazon Web Services (AWS)&lt;/a&gt; developer, I am often asked what is the best way to organise datasets in &lt;a href=&#34;https://aws.amazon.com/s3/&#34;&gt;S3&lt;/a&gt;. A dataset could comprise data exported by business systems, or data emitted by AWS services, such as &lt;a href=&#34;https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html&#34;&gt;CloudFront logs&lt;/a&gt;, or &lt;a href=&#34;https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-working-with-log-files.html&#34;&gt;CloudTrail logs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Far too often I have seen datasets just dumped into one massive S3 bucket, and left for someone else to tidy up later, however with a little consideration, and empathy for those dealing with this in the future, we can do better than this.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As an <a href="https://aws.amazon.com">Amazon Web Services (AWS)</a> developer, I am often asked what is the best way to organise datasets in <a href="https://aws.amazon.com/s3/">S3</a>. A dataset could comprise data exported by business systems, or data emitted by AWS services, such as <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html">CloudFront logs</a>, or <a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-working-with-log-files.html">CloudTrail logs</a>.</p>
<p>Far too often I have seen datasets just dumped into one massive S3 bucket, and left for someone else to tidy up later, however with a little consideration, and empathy for those dealing with this in the future, we can do better than this.</p>
<h1 id="start-by-asking-a-few-questions">Start by Asking a Few Questions</h1>
<p>When I am planning to store a dataset in S3 I typically ask a few questions. One thing to note is that at this stage I am focused on the semantics of the data and the business, not just the technical configuration of the bucket(s).</p>
<h2 id="what-will-consume-this-information">What will consume this information?</h2>
<p>What I am trying to understand here is whether this dataset has any known consumers, with the AWS logs example, this may be an ingestion tool like <a href="https://www.splunk.com/">Splunk</a>, which is easier to integrate with if there are a few aggregate buckets.</p>
<p>For datasets which are exported from other systems, or transformed for use in an application, or with an integration it may be easier to combine them into one bucket, especially if other factors I cover in the next few questions aren&rsquo;t a concern.</p>
<p>As you will see in my following questions, this is a trade off, and I would also review other points below to determine which is the best approach in the long run.</p>
<h2 id="what-is-the-classification-for-the-data">What is the classification for the data?</h2>
<p>My goal here is to consider the sensitivity of the data, and how it could affect who is granted access.</p>
<p>Keeping sensitive datasets isolated in their own bucket makes it easier to add controls, and simplifies auditing as there is only one top level identifier, i.e., the bucket name.</p>
<p>One thing to avoid is mixing different data classifications in one bucket, as you then need to tag all data in that bucket at the highest classification, which could complicate granting access to the data.</p>
<p>For an example of data classifications, here is the five-tier commercial data classification approach from the book <a href="https://www.pearsonitcertification.com/articles/article.aspx?p=30287&amp;seqNum=9">CISSP Security Management and Practices</a>:</p>
<ul>
<li>Sensitive</li>
<li>Confidential</li>
<li>Private</li>
<li>Proprietary</li>
<li>Public</li>
</ul>
<p>These classifications would be assigned to a tag, named something like <code>Classification</code>, on your bucket; for more on this see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html">Categorizing your storage using tags</a>.</p>
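<p>As a sketch using the AWS CLI, applying such a tag looks like the following; the bucket name and classification value are placeholders:</p>

```
# Tag a bucket with its data classification
aws s3api put-bucket-tagging \
  --bucket example-reports-data \
  --tagging 'TagSet=[{Key=Classification,Value=Confidential}]'
```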
<p>In general I recommend keeping different classifications of data separated, for example having raw data and anonymised, or cleaned data in the same bucket is <strong>not</strong> a good idea.</p>
<h2 id="what-are-the-cost-implications-for-keeping-this-dataset">What are the cost implications for keeping this dataset?</h2>
<p>The aim of this question is to understand how cost will be managed for a dataset. There are two main factors here: the size of the data, and how much churn the dataset will see, either in the form of reads or updates to the data.</p>
<p>For datasets which grow quickly, it may be easier to isolate them in their own bucket, as reporting cost for the dataset is easier, and cost control mechanisms, such as lifecycle policies or disabling/enabling versioning, are simpler to apply.</p>
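<p>As a sketch, a lifecycle configuration for a fast-growing log dataset might transition objects to infrequent access storage and expire them after a retention period; the prefix and day counts below are illustrative:</p>

```json
{
  "Rules": [
    {
      "ID": "age-out-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```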
<p>For more information on optimising storage costs, see <a href="https://aws.amazon.com/s3/cost-optimization/">Optimizing costs for Amazon S3</a>.</p>
<h2 id="what-is-the-likely-growth-of-this-dataset-in-6-to-12-months">What is the likely growth of this dataset in 6 to 12 months?</h2>
<p>This question is related to the previous cost question, but I am trying to understand how challenging the dataset will be to handle over time. External factors such as traffic spikes for log datasets, which are often outside your control, should be taken into consideration as well.</p>
<p>There are two dimensions to this: the size of the data, and the number of objects in the dataset. Both can have an impact on how difficult the dataset will be to wrangle in the future, and how much it will cost to move or back up.</p>
<p>For more information on how to monitor and manage dataset growth in Amazon S3 I recommend digging into <a href="https://aws.amazon.com/s3/storage-lens/">Amazon S3 Storage Lens</a>.</p>
<h1 id="summary">Summary</h1>
<p>As a general rule, I would recommend keeping datasets in separate buckets, with each bucket containing data of a single classification, and ideally a single purpose. This will help to simplify cost control, and make it easier to manage the data in the future.</p>
<p>Getting things right from the start will enable you to make the most of your datasets, which is a potential differentiator for your business in this new era of cloud computing, data engineering and AI.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Using a Monorepo to publish Lean Go Packages with Workspaces</title>
      <link>https://www.wolfe.id.au/2023/12/28/using-a-monorepo-to-publish-lean-go-packages-with-workspaces/</link>
      <pubDate>Thu, 28 Dec 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/12/28/using-a-monorepo-to-publish-lean-go-packages-with-workspaces/</guid>
      <description>&lt;p&gt;As a developer who works with Go in my day-to-day development, I constantly struggle with third party packages or tools which bring in a lot of dependencies. This is especially true when you&amp;rsquo;re trying to keep your project dependencies up to date, while &lt;a href=&#34;https://github.com/dependabot&#34;&gt;dependabot&lt;/a&gt;, and other security software, is screaming about vulnerabilities in dependencies of dependencies.&lt;/p&gt;
&lt;p&gt;This is especially a problem with three common cases I run into:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Any HTTP adaptor package, which ships with integrations for multiple server packages, such as Gin, Echo, and others.&lt;/li&gt;
&lt;li&gt;Any package which uses docker to test with containers.&lt;/li&gt;
&lt;li&gt;Projects which include examples with their own dependencies.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To break this cycle in my own projects, and packages I publish privately in work projects, I have adopted the use of &lt;a href=&#34;https://go.dev/ref/mod#workspaces&#34;&gt;Go workspaces&lt;/a&gt;, which allows me to create a monorepo broken up into multiple packages, and then publish one or more of these packages.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As a developer who works with Go in my day-to-day development, I constantly struggle with third party packages or tools which bring in a lot of dependencies. This is especially true when you&rsquo;re trying to keep your project dependencies up to date, while <a href="https://github.com/dependabot">dependabot</a>, and other security software, is screaming about vulnerabilities in dependencies of dependencies.</p>
<p>This is especially a problem with three common cases I run into:</p>
<ol>
<li>Any HTTP adaptor package, which ships with integrations for multiple server packages, such as Gin, Echo, and others.</li>
<li>Any package which uses docker to test with containers.</li>
<li>Projects which include examples with their own dependencies.</li>
</ol>
<p>To break this cycle in my own projects, and packages I publish privately in work projects, I have adopted the use of <a href="https://go.dev/ref/mod#workspaces">Go workspaces</a>, which allows me to create a monorepo broken up into multiple packages, and then publish one or more of these packages.</p>
<p>So to understand how this helps, let&rsquo;s look at an example. I have a project called <a href="https://github.com/wolfeidau/s3iofs">s3iofs</a> which provides an S3-based <a href="https://pkg.go.dev/io/fs">io/fs</a> adaptor, and within this project I have integration tests which use docker and a <a href="https://min.io/">minio</a> server to test it.</p>
<p>Before I started using workspaces, adding this package to your project would add the docker client to your dependencies, which in turn would add its own dependencies, resulting in a lot of bloat in your project.</p>
<p>This is best illustrated by the following dependency count, which is from my <code>github.com/wolfeidau/s3iofs</code> package.</p>
<pre tabindex="0"><code>cat go.sum | wc -l
65
</code></pre><p>By comparison my <code>github.com/wolfeidau/s3iofs/integration</code> package has the following dependency count.</p>
<pre tabindex="0"><code>cat go.sum | wc -l
185
</code></pre><p>This is a rather simplistic comparison, but you can see that the integration tests have a lot more dependencies.</p>
<p>Because I have isolated the docker based integration tests in their own package, within this workspace, I can develop away happily, not needing to micromanage these modules, while you as the consumer of my package get a lean secure package.</p>
<h2 id="how-to-use-workspaces">How to use workspaces</h2>
<p>To get started with workspaces I recommend working through the official tutorial, <a href="https://go.dev/doc/tutorial/workspaces">Getting started with multi-module workspaces</a>.</p>
<p>Once you have read through the getting started guide, you can set up a similar layout for your own packages with the following steps.</p>
<ul>
<li>First we should initialise our Go project. This is done from an empty folder with the same name as the project, and will create a go.mod file.</li>
</ul>
<pre tabindex="0"><code>mkdir s3backend
cd s3backend
go mod init github.com/wolfeidau/s3backend
</code></pre><ul>
<li>Once we have written some code and added some dependencies, we can set up some integration tests. To do this we initialise another Go module in a subfolder called <code>integration</code>. In the case of <code>s3iofs</code>, the only code in this folder is test files.</li>
</ul>
<pre tabindex="0"><code>mkdir integration
cd integration
go mod init github.com/wolfeidau/s3backend/integration
</code></pre><ul>
<li>Now we can initialise our workspace and add the two modules: our library in the root, and the integration tests.</li>
</ul>
<pre tabindex="0"><code>go work init
go work use .
go work use ./integration
</code></pre><ul>
<li>Now we can run the tests in the integration folder; note the <code>-coverpkg</code> flag included in the command.</li>
</ul>
<pre tabindex="0"><code>cd integration
go test -covermode=atomic -coverpkg=github.com/wolfeidau/s3iofs -v ./...
</code></pre><ul>
<li>This will produce test results like the following. Note how I am able to report test coverage across module boundaries using the <code>-coverpkg</code> flag, which was introduced in Go 1.20 and is explained in <a href="https://go.dev/doc/build-cover">Coverage profiling support for integration tests</a>.</li>
</ul>
<pre tabindex="0"><code>PASS
coverage: 70.2% of statements in github.com/wolfeidau/s3iofs
2023/12/28 12:58:43 code 0
ok  	github.com/wolfeidau/s3iofs/integration	1.225s	coverage: 70.2% of statements in github.com/wolfeidau/s3iofs
</code></pre><p>If you need this <code>-coverpkg</code> option to work in VS Code, you will need to add the following to the <code>.vscode/settings.json</code> file in your project.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;go.testFlags&#34;</span><span class="p">:</span> <span class="p">[</span>
</span></span><span class="line"><span class="cl">        <span class="s2">&#34;-v&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="s2">&#34;-coverpkg=github.com/wolfeidau/s3iofs&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">]</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>This is just one use case for workspaces in a monorepo, but it is a very useful tool for managing the dependencies you use during development, while keeping what you provide to others as lean and secure as possible.</p>
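<p>For reference, the <code>go work</code> commands shown earlier generate a <code>go.work</code> file at the root of the repository. A sketch of what it contains (the version line will match your local Go toolchain):</p>

```go
go 1.21

use (
	.
	./integration
)
```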
<p>I recommend you clone the <a href="https://github.com/wolfeidau/s3iofs">s3iofs</a> repository and dig into how it works locally: open it in your editor of choice, run the tests, then try the same approach in your own projects.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Getting started with AI for developers</title>
      <link>https://www.wolfe.id.au/2023/12/16/getting-started-with-ai-for-developers/</link>
      <pubDate>Sat, 16 Dec 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/12/16/getting-started-with-ai-for-developers/</guid>
      <description>&lt;p&gt;As a software developer, I have seen a lot of changes over the years; however, few have been as drastic as the rise of artificial intelligence. There is a growing list of tools and services using this technology to help developers with day to day tasks and speed up their work, but few of these tools help them understand how the technology works, and what it can do. So I wanted to share some of my own tips on how to get started with AI.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As a software developer, I have seen a lot of changes over the years; however, few have been as drastic as the rise of artificial intelligence. There is a growing list of tools and services using this technology to help developers with day to day tasks and speed up their work, but few of these tools help them understand how the technology works, and what it can do. So I wanted to share some of my own tips on how to get started with AI.</p>
<p>The aim of this exercise is to help develop some intuition of how AI works, and how it can be used to help in your day-to-day tasks, while hopefully discovering ways to use it in future applications you build.</p>
<h2 id="getting-started">Getting Started</h2>
<p>As the common saying, which originated from a Chinese proverb, goes:</p>
<blockquote>
<p>A journey of a thousand miles begins with a single step.</p>
</blockquote>
<p>To kick off your understanding of AI, I recommend you select a coding assistant and start using it on your personal or side projects. This will give you a better understanding of how it succeeds, and sometimes fails. Building this knowledge up will help you develop a feel for its strengths and weaknesses as a user.</p>
<p>I personally recommend getting started with <a href="https://about.sourcegraph.com/cody">Cody</a> as it is a great tool, and is free for personal use, while also being open source itself. The developers of Cody are very open and helpful, and have a great community of users, while also sharing their own experiences while building the tool.</p>
<p>Cody is more than just a code completion tool, you can ask it questions and get it to summarise and document your code, and even generate test cases. Make sure you explore all the options, again to build up more knowledge of how these AI tools work.</p>
<p>And most importantly, be curious, and explore every corner of the tool.</p>
<h2 id="diving-into-llms">Diving Into LLMs</h2>
<p>Next, I recommend you start experimenting with some of the open source large language models (LLMs) using tools such as <a href="https://ollama.ai/">ollama</a>, which allows you to download, run and experiment with them locally. To get started with this tool, you can follow the quick start in the <code>README.md</code> hosted at <a href="https://github.com/jmorganca/ollama">https://github.com/jmorganca/ollama</a>. There is also a great intro by Sam Witteveen, <a href="https://www.youtube.com/watch?v=Ox8hhpgrUi0&amp;t=2s">Ollama - Local Models on your machine</a>, which I highly recommend.</p>
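<p>To make this concrete, here is roughly what a first ollama session looks like from the terminal, assuming you have already installed ollama and it is running locally (the model name is just an example):</p>

```shell
# Download a model from the ollama library (several GB on first pull)
ollama pull mistral

# Ask a one-off question without entering the interactive chat
ollama run mistral "Summarise what a goroutine is in two sentences."

# List the models downloaded to this machine
ollama list
```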
<h2 id="what-is-a-large-language-model">What is a large language model?</h2>
<p>Here is a quote from <a href="https://en.wikipedia.org/wiki/Large_language_model">wikipedia on what a large language model</a> is:</p>
<blockquote>
<p>A large language model (LLM) is a large scale language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by using massive amounts of data to learn billions of parameters during training and consuming large computational resources during their training and operation. LLMs are artificial neural networks (mainly <a href="https://en.wikipedia.org/wiki/Transformer_%28machine_learning_model%29">transformers</a>) and are (pre)trained using self-supervised learning and semi-supervised learning.</p>
</blockquote>
<h2 id="why-open-llms">Why Open LLMs?</h2>
<p>I prefer to learn from the open LLMs for the following reasons:</p>
<ol>
<li>They have a great community of developers and users, who share information about the latest developments.</li>
<li>You get a broader range of models, and can try them out and see what they do.</li>
<li>You can run them locally with your data, and see what they do without some of the privacy concerns of cloud based services.</li>
<li>You have the potential to fine tune them to your data, and improve the performance.</li>
</ol>
<p>To keep up with the latest developments I use the <a href="https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard">Hugging Face Open LLM Leaderboard</a>, as they have been doing a lot of work on large language models, and have a great community of users. When the latest models are posted, experiences and fine tuned versions are also shared via their <a href="https://huggingface.co/blog">blog</a>, which is a great resource. Notable models are normally added to ollama after a day or so, so you can try them out and see what they do.</p>
<p>There are a number of different types of LLMs, each with their own strengths and weaknesses. I personally like to experiment with the chat bot models, as they are very simple to use, and are easy to interface with via ollama. An example of one of these from the Hugging Face site is <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">https://huggingface.co/mistralai/Mistral-7B-v0.1</a>, which is a chat bot model trained by the <a href="https://mistral.ai/">Mistral AI</a> team.</p>
<p>To get started with this model you can follow the instructions at <a href="https://ollama.ai/library/mistral">https://ollama.ai/library/mistral</a>, download and run the model locally.</p>
<h2 id="pick-a-scenario-to-test">Pick a Scenario To Test</h2>
<p>My scenario relates to my current role, and covers questions which my team encounters on a day-to-day basis. As a team we provide advice to customers about how to improve the operational readiness and security posture of internally developed applications. This is a common scenario for many companies, where applications are developed as a proof of concept, and are then deployed to a production environment without the supporting processes in place.</p>
<p>This approach is helpful because:</p>
<ol>
<li>This is a scenario I can relate to, and can use my existing knowledge to review the results.</li>
<li>This is a scenario which is not too complex, and can be used to demonstrate the concepts.</li>
<li>This is a scenario which will provide me value while I am learning how to use the tools.</li>
</ol>
<h2 id="building-a-list-of-questions">Building a list of questions</h2>
<p>Once you have a scenario, you can draft a list of questions to start testing models with. This will help you understand how the models work and how they can be used to support a team or business unit, while also teaching you how to use them.</p>
<p>The questions I am currently using mainly focus on DevOps, and SRE processes, paired with a dash of <a href="https://aws.amazon.com/">AWS</a> security and terraform questions.</p>
<h3 id="i-need-to-create-a-secure-environment-in-and-aws-account-where-should-i-start">I need to create a secure environment in an AWS Account, where should I start?</h3>
<p>This question is really common for developers starting out in AWS. It is quite broad, and I am mostly expecting a high level overview of how to create a secure environment, and how to get started.</p>
<h3 id="how-would-i-create-an-encrypted-secure-s3-bucket-using-terraform">How would I create an encrypted secure s3 bucket using terraform?</h3>
<p>This question is a bit more specific, focusing on a single AWS service, while also adding a few specific requirements. Models like Mistral will provide a step by step guide on how to achieve this, while others will provide the Terraform code itself.</p>
<h3 id="i-need-to-create-an-application-risk-management-program-where-should-i-start">I need to create an Application Risk Management Program, where should I start?</h3>
<p>This question is quite common if you&rsquo;re working in a company which doesn&rsquo;t have a long history of internal software development, or in a team that is trying to ensure they cover the risks of their applications.</p>
<h3 id="what-is-a-good-sre-incident-process-for-a-business-application">What is a good SRE incident process for a business application?</h3>
<p>This question is also quite broad, but includes Site Reliability Engineering (SRE) as a keyword, so I am expecting an answer which aligns with the principles of this movement.</p>
<h3 id="what-is-a-good-checklist-for-a-serverless-developer-who-wants-to-improve-the-monitoring-of-their-applications">What is a good checklist for a serverless developer who wants to improve the monitoring of their applications?</h3>
<p>This is a common question asked by people who are just getting started with serverless and are interested in, or have been asked to improve the monitoring of their applications.</p>
<h2 id="whats-next">What&rsquo;s Next?</h2>
<p>So now that you have a scenario and a few questions I recommend you do the following:</p>
<ol>
<li>Try a couple of other models, probably <a href="https://ollama.ai/library/llama2">llama2</a> and <a href="https://ollama.ai/library/orca2">orca</a> are a good starting point.</li>
<li>Learn a bit about prompting by following <a href="https://replicate.com/blog/how-to-prompt-llama">A guide to prompting Llama 2</a> from the replicate blog.</li>
<li>Apply the prompts to your ollama model using a <a href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md">modelfile</a>, which is similar to a <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a>.</li>
<li>Try out an uncensored model, something like <a href="https://ollama.ai/library/llama2-uncensored">llama2-uncensored</a> and run through your questions, then ask about breaking into cars or killing processes, which can be a problematic question in some censored models. It is good to understand what censoring a model does, as it can be a useful tool for understanding the risks of using a model.</li>
<li>Start reading more about <a href="https://github.com/premAI-io/state-of-open-source-ai">The State of Open Source AI (2023 Edition)</a>.</li>
</ol>
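<p>For step 3 above, a modelfile lets you bake a prompt and parameters into a reusable local model. Here is a minimal sketch; the model name <code>sre-advisor</code>, the base model, and the system prompt are my own illustrative choices:</p>

```
# Modelfile - build it with: ollama create sre-advisor -f Modelfile
FROM llama2

# Lower temperature for more deterministic, advice-style answers
PARAMETER temperature 0.3

# System prompt applied to every conversation with this model
SYSTEM """You are an experienced SRE advisor. Answer questions about
operational readiness, monitoring and AWS security concisely."""
```

Once created, you can run it like any other model with <code>ollama run sre-advisor</code> and work through your question list.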
<h2 id="further-research">Further Research</h2>
<p>Now that you are dabbling with LLMs and AI, I recommend you try these models for the odd question in your day-to-day work; the local ones running in ollama are relatively safe, and they can save you a lot of work.</p>
<p>Also try similar questions with services such as <a href="https://chat.openai.com/">https://chat.openai.com/</a>; hosted services are a powerful tool for ad hoc testing and learning. Just be aware of data privacy and security when using these services.</p>
<p>Once you have some experience you will hopefully even incorporate a model into work projects such as data cleansing, summarisation, or processing of user feedback to help you improve your applications. For this you can use services such as <a href="https://aws.amazon.com/bedrock/">AWS Bedrock</a> on AWS, or <a href="https://cloud.google.com/generative-ai-studio">Generative AI Studio</a> on Google Cloud, while following the same methodology to evaluate and select a model for your use case.</p>
<p>If you&rsquo;re intrigued and want to go even deeper than these APIs, I recommend you dive into some of the amazing resources on the web for learning how AI and LLMs work, and possibly even develop, or fine tune, your own models.</p>
<ul>
<li><a href="https://www.fast.ai/">fast.ai</a> which provides some great online self paced learning on AI.</li>
<li><a href="https://www.youtube.com/watch?v=zjkBMFhNj_g">A busy person&rsquo;s intro to LLMs</a>, a great lecture on LLMs.</li>
<li><a href="http://introtodeeplearning.com/">MIT Introduction to Deep Learning</a> for those who want to dive deeper and prefer more of a structured course.</li>
<li><a href="https://www.youtube.com/watch?v=jkrNMKz9pWU">A Hackers&rsquo; Guide to Language Models</a>, another great talk by Jeremy Howard of fast.ai.</li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Avoid accidental exposure of authenticated Amazon API Gateway resources</title>
      <link>https://www.wolfe.id.au/2023/11/12/avoid-accidental-exposure-of-authenticated-amazon-api-gateway-resources/</link>
      <pubDate>Sun, 12 Nov 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/11/12/avoid-accidental-exposure-of-authenticated-amazon-api-gateway-resources/</guid>
      <description>&lt;p&gt;I have been working with &lt;a href=&#34;https://aws.amazon.com/api-gateway/&#34;&gt;Amazon API Gateway&lt;/a&gt; for a while and one thing I noticed is there are a few options for authentication, which can be confusing to developers, and lead to security issues. This post will cover one of the common security pitfalls with API Gateway and how to mitigate it.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re using &lt;code&gt;AWS_IAM&lt;/code&gt; authentication on an API Gateway, then make sure you set the default authorizer for all API resources. This will avoid accidentally exposing an API if you mis-configure, or omit, an authentication method for an API resource, as the default is &lt;code&gt;None&lt;/code&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working with <a href="https://aws.amazon.com/api-gateway/">Amazon API Gateway</a> for a while and one thing I noticed is there are a few options for authentication, which can be confusing to developers, and lead to security issues. This post will cover one of the common security pitfalls with API Gateway and how to mitigate it.</p>
<p>If you&rsquo;re using <code>AWS_IAM</code> authentication on an API Gateway, then make sure you set the default authorizer for all API resources. This will avoid accidentally exposing an API if you mis-configure, or omit, an authentication method for an API resource, as the default is <code>None</code>.</p>
<p>In addition to this, there is a way to apply a resource policy to an API Gateway, which will enforce a specific IAM access check on all API requests. Combining the default authorizer override and the resource policy allows us to apply multiple layers of protection to our API, following the principle of defense in depth.</p>
<p>To summarise, protecting your API with IAM authentication involves the following:</p>
<ol>
<li>Enable a default authorizer method on the API Gateway resource.</li>
<li>Enable an authentication method on the API.</li>
<li>Assign an API resource policy which requires IAM authentication to access the API.</li>
</ol>
<p>Doing this with <a href="https://aws.amazon.com/serverless/sam/">AWS SAM</a> is fairly straightforward; to read more about it see the <a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-api-apiauth.html">SAM ApiAuth documentation</a>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">AthenaWorkflowApi</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::Serverless::Api</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="l">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Auth</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="c"># Specify a default authorizer for the API Gateway API to protect against missing configuration</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">DefaultAuthorizer</span><span class="p">:</span><span class="w"> </span><span class="l">AWS_IAM</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="c"># Configure Resource Policy for all methods and paths on an API as an extra layer of protection</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">ResourcePolicy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="c"># The AWS accounts to allow</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">AwsAccountWhitelist</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">           </span>- !<span class="l">Ref AWS::AccountId</span><span class="w">
</span></span></span></code></pre></div><p>Through the magic of AWS SAM this produces a resource policy like the following, with all the API methods protected and only accessible by users authenticated to this account, and only where they are granted access via an IAM policy.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;Version&#34;</span><span class="p">:</span> <span class="s2">&#34;2012-10-17&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;Statement&#34;</span><span class="p">:</span> <span class="p">[</span>
</span></span><span class="line"><span class="cl">    <span class="p">{</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Effect&#34;</span><span class="p">:</span> <span class="s2">&#34;Allow&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Principal&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;AWS&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::123456789012:root&#34;</span>
</span></span><span class="line"><span class="cl">      <span class="p">},</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Action&#34;</span><span class="p">:</span> <span class="s2">&#34;execute-api:Invoke&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Resource&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:execute-api:us-west-2:123456789012:abc123abc1/Prod/POST/athena/run_s3_query_template&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="p">{</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Effect&#34;</span><span class="p">:</span> <span class="s2">&#34;Allow&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Principal&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;AWS&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::123456789012:root&#34;</span>
</span></span><span class="line"><span class="cl">      <span class="p">},</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Action&#34;</span><span class="p">:</span> <span class="s2">&#34;execute-api:Invoke&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Resource&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:execute-api:us-west-2:123456789012:abc123abc1/Prod/POST/athena/run_query_template&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">  <span class="p">]</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>I typically use an OpenAPI spec to define the API, using the extensions provided by AWS, such as <code>x-amazon-apigateway-auth</code>, to define the authorisation.</p>
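<p>As a sketch of what that looks like in an OpenAPI spec, here is a fragment requiring IAM authentication on a single method; the path and response details are illustrative, not taken from a real API:</p>

```yaml
paths:
  /athena/run_query_template:
    post:
      # Require SigV4 signed requests for this method
      x-amazon-apigateway-auth:
        type: AWS_IAM
      responses:
        "200":
          description: Query template executed
```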
<p>With the default authorizer set to <code>AWS_IAM</code>, hitting an API resource which is missing <code>x-amazon-apigateway-auth</code> using curl returns the following error.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span><span class="nt">&#34;message&#34;</span><span class="p">:</span><span class="s2">&#34;Missing Authentication Token&#34;</span><span class="p">}</span>
</span></span></code></pre></div><p>With the default authorizer disabled and the resource policy enabled, the API returns the following error, which illustrates the principle of defense in depth.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span><span class="nt">&#34;Message&#34;</span><span class="p">:</span><span class="s2">&#34;User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-east-1:********9012:abc123abc1/Prod/POST/athena/run_query_template&#34;</span><span class="p">}</span>
</span></span></code></pre></div>]]></content:encoded>
    </item>
    
    <item>
      <title>RIP AWS Go Lambda Runtime</title>
      <link>https://www.wolfe.id.au/2023/08/09/rip-aws-go-lambda-runtime/</link>
      <pubDate>Wed, 09 Aug 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/08/09/rip-aws-go-lambda-runtime/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://aws.amazon.com/&#34;&gt;Amazon Web Services (AWS)&lt;/a&gt; is &lt;a href=&#34;https://aws.amazon.com/blogs/compute/migrating-aws-lambda-functions-from-the-go1-x-runtime-to-the-custom-runtime-on-amazon-linux-2/&#34;&gt;deprecating the &lt;code&gt;go1.x&lt;/code&gt; runtime on Lambda&lt;/a&gt;, this is currently scheduled for December 31, 2023. Customers need to migrate their Go based lambda functions to the &lt;code&gt;al2.provided&lt;/code&gt; runtime, which uses &lt;a href=&#34;https://aws.amazon.com/amazon-linux-2/&#34;&gt;Amazon Linux 2&lt;/a&gt; as the execution environment. I think this is a bad thing for a couple of reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;There is no automated migration path from existing &lt;a href=&#34;https://go.dev&#34;&gt;Go&lt;/a&gt; Lambda functions to the new custom runtime. Customers will need to manually refactor and migrate each function to this new runtime, which is time-consuming and error-prone.&lt;/li&gt;
&lt;li&gt;This will remove the &lt;code&gt;Go1.x&lt;/code&gt; name from the Lambda console; Go will now just be another &amp;ldquo;custom&amp;rdquo; runtime instead of a first class supported language. This makes Go development on Lambda seem less official/supported compared to other languages like Node, Python, Java etc.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Case in point, try searching for &amp;ldquo;al2.provided lambda&amp;rdquo; on Google and see how little documentation comes up compared to &amp;ldquo;go1.x lambda&amp;rdquo;. The migration essentially removes the branding and discoverability of Go as a Lambda language, I am sure this will improve over time, but it is still ambiguous.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> is <a href="https://aws.amazon.com/blogs/compute/migrating-aws-lambda-functions-from-the-go1-x-runtime-to-the-custom-runtime-on-amazon-linux-2/">deprecating the <code>go1.x</code> runtime on Lambda</a>, this is currently scheduled for December 31, 2023. Customers need to migrate their Go based lambda functions to the <code>al2.provided</code> runtime, which uses <a href="https://aws.amazon.com/amazon-linux-2/">Amazon Linux 2</a> as the execution environment. I think this is a bad thing for a couple of reasons:</p>
<ol>
<li>There is no automated migration path from existing <a href="https://go.dev">Go</a> Lambda functions to the new custom runtime. Customers will need to manually refactor and migrate each function to this new runtime, which is time-consuming and error-prone.</li>
<li>This will remove the <code>Go1.x</code> name from the Lambda console; Go will now just be another &ldquo;custom&rdquo; runtime instead of a first class supported language. This makes Go development on Lambda seem less official/supported compared to other languages like Node, Python, Java etc.</li>
</ol>
<p>Case in point, try searching for &ldquo;al2.provided lambda&rdquo; on Google and see how little documentation comes up compared to &ldquo;go1.x lambda&rdquo;. The migration essentially removes the branding and discoverability of Go as a Lambda language, I am sure this will improve over time, but it is still ambiguous.</p>
<p>There are articles on the advantages of the <code>al2.provided</code> runtime, including how to migrate functions over to it, such as <a href="https://www.capitalone.com/tech/cloud/custom-runtimes-for-go-based-lambda-functions/">https://www.capitalone.com/tech/cloud/custom-runtimes-for-go-based-lambda-functions/</a>.</p>
<h1 id="why-is-this-hard">Why is this hard?</h1>
<p>The main reason migrating Go Lambda functions to the new runtime is difficult is because:</p>
<ol>
<li>Unlike the runtimes provided for other languages, the custom runtime doesn&rsquo;t use the <code>Handler</code> parameter to determine the function entry point; this value is ignored, but still required. This is a subtle difference that can cause issues if developers are unaware or don&rsquo;t read the documentation closely.</li>
<li>The Lambda service doesn&rsquo;t check if the bootstrap entry point exists in the archive, so customers may deploy broken functions if they don&rsquo;t validate this. Sadly, this is NOT very intuitive, and often leads to confusion and errors.</li>
</ol>
<p><strong>Note:</strong> As pointed out by <a href="https://twitter.com/__steele">@Aidan W Steele</a> some deployment tools upload empty archives, then later replace them with an updated archive containing the deployed code, so this could be problematic.</p>
<p>For those interested in what the error looks like if you&rsquo;re missing the bootstrap file, it will return:</p>
<pre tabindex="0"><code>{&#34;errorType&#34;:&#34;Runtime.InvalidEntrypoint&#34;,&#34;errorMessage&#34;:&#34;RequestId: d604d105-51be-49ce-8457-eee1641398eb Error: Couldn&#39;t find valid bootstrap(s): [/var/task/bootstrap /opt/bootstrap]&#34;}
</code></pre><p>If you see this, you need to validate your deployment package contains the required <code>bootstrap</code> file in the root of the zip archive.</p>
<h1 id="why-is-removing-go-1x-a-bad-idea">Why is removing Go 1.x a bad idea?</h1>
<p>There will be no <code>Go</code> Lambda runtime available after this date. This will be more of an issue for developers who have never used AWS and expect Lambda to have a Go runtime available out of the box; it is a change that will require some education and guidance for new developers.</p>
<p>Some of the drawbacks of this are:</p>
<ol>
<li>You won&rsquo;t be able to see Go functions directly in the Lambda console anymore. Go will just be another &ldquo;custom&rdquo; runtime instead of a first class supported language like Node, Python, Java etc. This makes Go development on Lambda seem less official/supported.</li>
<li>Developers will find samples or projects using the old Go 1.x runtime that no longer work out of the box. This will lead to confusion as they try to migrate those functions over to the new runtime.</li>
<li>Listing lambda functions by runtime used will no longer show &ldquo;Go1.x&rdquo; making it less clear if a function was written for Go or another language like Rust or Nim that also use the custom runtime.</li>
<li>Finding code samples for Go lambda functions on GitHub or tutorials will need to specify if they are using the old or new runtime. A lot of existing content will be outdated immediately.</li>
</ol>
<h1 id="what-can-aws-do-better">What can AWS do better?</h1>
<p>So what could AWS do to mitigate some of these issues? Here are a few suggestions:</p>
<ol>
<li>Provide an updated <code>go1.al2</code>, which would match the pattern of Java <a href="https://aws.amazon.com/blogs/compute/migrating-aws-lambda-functions-to-al2/">update <code>java8.al2</code> runtime announced a few years ago</a>. This updated runtime would use the same entry point convention as the other languages like Node, Python etc and retain the existing user experience, avoiding the hard coded <code>bootstrap</code> file which is not very intuitive.</li>
<li>Add validation to the deployment process to check for the required bootstrap file, and prevent deployment of invalid archives. This would avoid broken functions being deployed.</li>
</ol>
<p>I am disappointed that AWS did not invest a bit more time in listening to customers around the usability of the <code>provided.al2</code> runtime. Customers are used to compiling applications to a binary with a descriptive name, then deploying that binary to AWS; having to output a specific file called <code>bootstrap</code> is not very intuitive or discoverable.</p>
<h1 id="examples">Examples</h1>
<p>To illustrate the differences I have included some examples; hopefully this helps those not familiar with Lambda see the challenges.</p>
<h2 id="building">Building</h2>
<p>This is an example build recipe for Go-based functions prior to migration. It builds all commands, using the name of each command&rsquo;s directory as its binary name, then zips up all the binaries whose names end in <code>-lambda</code>.</p>
<pre tabindex="0"><code>build:
  CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags=&#34;-s -w&#34; -trimpath -o dist ./cmd/...

archive:
  cd dist &amp;&amp; zip -X -9 ./deploy.zip *-lambda
</code></pre><p>With this migration, developers will need to package each of their functions into its own archive containing a <code>bootstrap</code> file, and upload each archive to S3 individually rather than zipping multiple binaries together.</p>
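<p>After migration, the build steps above might look something like the following sketch. The target names and paths here are assumptions, but the key requirements are real: each binary must be named <code>bootstrap</code>, and each function needs its own archive.</p>

```make
# Build each function as a binary named "bootstrap" in its own
# directory, then zip each one individually (one archive per function).
build:
	CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -ldflags="-s -w" -trimpath \
		-o dist/example-lambda/bootstrap ./cmd/example-lambda

archive:
	# cd into the directory so bootstrap sits at the root of the zip
	cd dist/example-lambda && zip -X -9 ../example-lambda.zip bootstrap
```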
<h2 id="deployment-configuration-examples">Deployment Configuration Examples</h2>
<p>This is an example SAM template for a Go-based function prior to migration:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">ExampleFunction</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::Serverless::Function</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Runtime</span><span class="p">:</span><span class="w"> </span><span class="l">go1.x</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># this is an example archive containing one or more Go binary files</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">CodeUri</span><span class="p">:</span><span class="w"> </span><span class="l">../../dist/deploy.zip  </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># note: example-lambda is the name of the compiled Go binary file</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Handler</span><span class="p">:</span><span class="w"> </span><span class="l">example-lambda</span><span class="w">
</span></span></span></code></pre></div><p>This is an example SAM template for a Go-based function after migration:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">ExampleFunction</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::Serverless::Function</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Runtime</span><span class="p">:</span><span class="w"> </span><span class="l">provided.al2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># example archive which must contain a file named bootstrap, </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># which is not referenced or checked during deploy.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">CodeUri</span><span class="p">:</span><span class="w"> </span><span class="l">../../dist/example_Linux_arm64.zip</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># unused by this runtime but still required and can </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># cause some confusion with developers if not aware</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Handler</span><span class="p">:</span><span class="w"> </span><span class="l">nope </span><span class="w">
</span></span></span></code></pre></div><h1 id="closing-thoughts">Closing Thoughts</h1>
<p>While the custom runtime provides better performance and an updated operating system, the change will require effort from many Go developers on AWS Lambda. Some automated assistance and validation from AWS could help reduce the friction and issues from this change.</p>
<p>Personally I am sad to see AWS Lambda remove Go as a first-class language; as an early adopter of serverless it felt great to have Go supported out of the box. I will miss seeing the gopher logo when browsing functions! 😞🪦</p>
<p>Overall, I think this will negatively affect the adoption of Go on AWS Lambda, at least in the short term, as a lot of developers will find the custom runtime requirements unfamiliar and confusing compared to other runtimes.</p>
<p>As is often the case, new developers will likely struggle most with <code>provided.al2</code>, and many will give up and use another language rather than taking the time to understand the custom runtime&rsquo;s complexities.</p>
<p>What are your thoughts on the migration and how AWS could improve the experience?</p>
<h1 id="updates">Updates</h1>
<p>Thanks to <a href="https://twitter.com/__steele">Aidan W Steele</a> for the feedback on my <code>go2.x</code> suggestion, offering a much better alternative of <code>go1.al2</code>, which would match the pattern of <code>java8.al2</code>, and for the reminder of the various empty-zip-file shenanigans used in some deployment tools.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Stop using IAM User Credentials with Terraform Cloud</title>
      <link>https://www.wolfe.id.au/2023/07/17/stop-using-iam-user-credentials-with-terraform-cloud/</link>
      <pubDate>Mon, 17 Jul 2023 07:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/07/17/stop-using-iam-user-credentials-with-terraform-cloud/</guid>
      <description>&lt;p&gt;I recently started using &lt;a href=&#34;https://www.terraform.io/&#34;&gt;Terraform Cloud&lt;/a&gt; but discovered that the &lt;a href=&#34;https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-create-variable-set#create-a-variable-set&#34;&gt;getting started tutorial&lt;/a&gt; which describes how to integrate it with &lt;a href=&#34;https://aws.amazon.com/&#34;&gt;Amazon Web Services (AWS)&lt;/a&gt; suggested using &lt;a href=&#34;https://aws.amazon.com/iam/features/managing-user-credentials/&#34;&gt;IAM user credentials&lt;/a&gt;. This is not ideal as these credentials are long-lived and can lead to security issues.&lt;/p&gt;
&lt;h2 id=&#34;what-is-the-problem-with-iam-user-credentials&#34;&gt;What is the problem with IAM User Credentials?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;IAM User Credentials are long lived, meaning once compromised they allow access for a long time&lt;/li&gt;
&lt;li&gt;They are static, so if leaked it is difficult to revoke access immediately&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But there are better alternatives, the one I recommend is &lt;a href=&#34;https://openid.net/developers/how-connect-works/&#34;&gt;OpenID Connect (OIDC)&lt;/a&gt;, which if you dig deep into the Terraform Cloud docs is a supported approach. This has a few benefits:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently started using <a href="https://www.terraform.io/">Terraform Cloud</a> but discovered that the <a href="https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-create-variable-set#create-a-variable-set">getting started tutorial</a> which describes how to integrate it with <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> suggested using <a href="https://aws.amazon.com/iam/features/managing-user-credentials/">IAM user credentials</a>. This is not ideal as these credentials are long-lived and can lead to security issues.</p>
<h2 id="what-is-the-problem-with-iam-user-credentials">What is the problem with IAM User Credentials?</h2>
<ul>
<li>IAM User Credentials are long lived, meaning once compromised they allow access for a long time</li>
<li>They are static, so if leaked it is difficult to revoke access immediately</li>
</ul>
<p>But there are better alternatives, the one I recommend is <a href="https://openid.net/developers/how-connect-works/">OpenID Connect (OIDC)</a>, which if you dig deep into the Terraform Cloud docs is a supported approach. This has a few benefits:</p>
<ol>
<li>Credentials are dynamically created for each run, so if one set is compromised it does not affect other runs.</li>
<li>When Terraform Cloud authenticates with AWS using OIDC it will pass information about the project and run, so you can enforce IAM policies based on this context.</li>
<li>Credentials are short lived, expiring after the Terraform run completes.</li>
<li>You can immediately revoke access by removing the OIDC provider from AWS.</li>
<li>You don’t need to export credentials from AWS and manage their rotation.</li>
</ol>
<p>Overall this allows for a more secure and scalable approach to integrating Terraform Cloud with AWS. If you are just starting out, I would recommend setting up OpenID Connect integration instead of using IAM credentials.</p>
<h2 id="aws-deployment">AWS Deployment</h2>
<p>To link AWS to Terraform Cloud we need to deploy some resources on the AWS side; in my case I am using a CloudFormation template which I deploy manually. You can find the source code for this template in my <a href="https://github.com/wolfeidau/terraform-cloud-aws-blog">GitHub Repo</a>, along with a Terraform example to deploy the resources.</p>
<p>Using the CloudFormation template as the example for this post, it creates:</p>
<ol>
<li>An IAM role, which is assumed by Terraform Cloud when deploying</li>
<li>An OpenID Connect provider, which is used to connect Terraform Cloud to AWS</li>
</ol>
<p>The Terraform Deployment role is as follows:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">TerraformDeploymentRole</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::IAM::Role</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">AssumeRolePolicyDocument</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Allow</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Action</span><span class="p">:</span><span class="w"> </span><span class="l">sts:AssumeRoleWithWebIdentity</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Principal</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">Federated</span><span class="p">:</span><span class="w"> </span>!<span class="l">Ref TerraformOIDCProvider</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Condition</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">StringEquals</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">app.terraform.io:aud</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;aws.workload.identity&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">StringLike</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">app.terraform.io:sub</span><span class="p">:</span><span class="w"> </span>!<span class="l">Sub organization:${OrganizationName}:project:${ProjectName}:workspace:${WorkspaceName}:run_phase:*</span><span class="w">
</span></span></span></code></pre></div><p><strong>Note:</strong></p>
<ul>
<li>The IAM role allows Terraform Cloud to assume the role using the OIDC provider, and limits it to the given organization, project and workspace names.</li>
<li>The policy attached to this role, in my example, only allows Terraform to list s3 buckets; you should customise this based on your needs.</li>
</ul>
<p>The Open ID Connect Provider is created as follows:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">TerraformOIDCProvider</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::IAM::OIDCProvider</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Url</span><span class="p">:</span><span class="w"> </span><span class="l">https://app.terraform.io</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">ClientIdList</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">aws.workload.identity</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">ThumbprintList</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">9e99a48a9960b14926bb7f3b02e22da2b0ab7280</span><span class="w">
</span></span></span></code></pre></div><p>Once deployed this template will provide two outputs:</p>
<ol>
<li>The role ARN for the Terraform Deployment role.</li>
<li>An optional audience value, which is only needed if you want to customise it.</li>
</ol>
<h2 id="terraform-cloud-configuration">Terraform Cloud Configuration</h2>
<p>You’ll need to set a couple of environment variables in your Terraform Cloud workspace in order to authenticate with AWS using OIDC. You can set these as workspace variables, or if you’d like to share one AWS role across multiple workspaces, you can use a variable set.</p>
<table>
  <thead>
      <tr>
          <th>Variable</th>
          <th>Value</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>TFC_AWS_PROVIDER_AUTH</td>
          <td><code>true</code></td>
      </tr>
      <tr>
          <td>TFC_AWS_RUN_ROLE_ARN</td>
          <td>The role ARN from the cloudformation stack outputs</td>
      </tr>
      <tr>
          <td>TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE</td>
          <td>The optional audience value from the stack outputs. Defaults to <code>aws.workload.identity</code>.</td>
      </tr>
  </tbody>
</table>
<p>Note: for more advanced configuration options, please refer to <a href="https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/aws-configuration">Terraform Cloud - Dynamic Credentials with the AWS Provider</a>.</p>
<p>That is it; you&rsquo;re now ready to run plans in your Terraform Cloud workspace!</p>
<h2 id="auditing">Auditing</h2>
<p>Once you have set up both sides of this solution you should be able to see events in <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a>; filter by the service <code>sts.amazonaws.com</code> and look at the <code>AssumeRoleWithWebIdentity</code> events. Each event will contain a record of the Terraform Cloud run, including the name of the project and workspace.</p>
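<p>The subject claim embedded in these events follows a predictable <code>key:value</code> format, so it is straightforward to parse out the organization, project, workspace and run phase, for example when building alerts on top of CloudTrail. Here is a minimal sketch in Python; the event shape is trimmed to the fields used, and the function name is my own.</p>

```python
# Parse the Terraform Cloud OIDC subject claim from a CloudTrail
# AssumeRoleWithWebIdentity event into its component parts.

def parse_tfc_subject(subject: str) -> dict:
    # Format: organization:<org>:project:<project>:workspace:<ws>:run_phase:<phase>
    parts = subject.split(":")
    # Even positions are keys, odd positions are values
    return dict(zip(parts[0::2], parts[1::2]))

event = {
    "eventName": "AssumeRoleWithWebIdentity",
    "responseElements": {
        "subjectFromWebIdentityToken": (
            "organization:test-organization:project:Default Project"
            ":workspace:test-terraform-cloud:run_phase:plan"
        ),
    },
}

claims = parse_tfc_subject(event["responseElements"]["subjectFromWebIdentityToken"])
print(claims["workspace"], claims["run_phase"])  # test-terraform-cloud plan
```

<p>Note that this simple split works because Terraform Cloud organization, project and workspace names cannot themselves contain colons.</p>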
<p>This is a cut-down CloudTrail event showing the key information:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;userIdentity&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;type&#34;</span><span class="p">:</span> <span class="s2">&#34;WebIdentityUser&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;principalId&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:oidc-provider/app.terraform.io:aws.workload.identity:organization:test-organization:project:Default Project:workspace:test-terraform-cloud:run_phase:plan&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;userName&#34;</span><span class="p">:</span> <span class="s2">&#34;organization:test-organization:project:Default Project:workspace:test-terraform-cloud:run_phase:plan&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;identityProvider&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:oidc-provider/app.terraform.io&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventTime&#34;</span><span class="p">:</span> <span class="s2">&#34;2023-07-18T00:08:34Z&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventSource&#34;</span><span class="p">:</span> <span class="s2">&#34;sts.amazonaws.com&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventName&#34;</span><span class="p">:</span> <span class="s2">&#34;AssumeRoleWithWebIdentity&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;awsRegion&#34;</span><span class="p">:</span> <span class="s2">&#34;ap-southeast-2&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;sourceIPAddress&#34;</span><span class="p">:</span> <span class="s2">&#34;x.x.x.x&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;userAgent&#34;</span><span class="p">:</span> <span class="s2">&#34;APN/1.0 HashiCorp/1.0 Terraform/1.5.2 (+https://www.terraform.io) terraform-provider-aws/5.7.0 (+https://registry.terraform.io/providers/hashicorp/aws) aws-sdk-go-v2/1.18.1 os/linux lang/go/1.20.5 md/GOOS/linux md/GOARCH/amd64 api/sts/1.19.2&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;requestParameters&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;roleArn&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:role/terraform-cloud-oidc-acces-TerraformDeploymentRole-NOPE&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;roleSessionName&#34;</span><span class="p">:</span> <span class="s2">&#34;terraform-run-abc123&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;responseElements&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;subjectFromWebIdentityToken&#34;</span><span class="p">:</span> <span class="s2">&#34;organization:test-organization:project:Default Project:workspace:test-terraform-cloud:run_phase:plan&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;assumedRoleUser&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;assumedRoleId&#34;</span><span class="p">:</span> <span class="s2">&#34;CDE456:terraform-run-abc123&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;arn&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:sts::12121212121212:assumed-role/terraform-cloud-oidc-acces-TerraformDeploymentRole-NOPE/terraform-run-abc123&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">},</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;provider&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:oidc-provider/app.terraform.io&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;audience&#34;</span><span class="p">:</span> <span class="s2">&#34;aws.workload.identity&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;readOnly&#34;</span><span class="p">:</span> <span class="kc">true</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventType&#34;</span><span class="p">:</span> <span class="s2">&#34;AwsApiCall&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;recipientAccountId&#34;</span><span class="p">:</span> <span class="s2">&#34;12121212121212&#34;</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><h2 id="links">Links</h2>
<ul>
<li><a href="https://www.wiz.io/blog/how-to-get-rid-of-aws-access-keys-part-1-the-easy-wins">How to get rid of AWS access keys - Part 1: The easy wins</a></li>
<li><a href="https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/aws-configuration">Terraform Cloud - Dynamic Credentials with the AWS Provider</a></li>
<li><a href="https://aws.amazon.com/blogs/apn/simplify-and-secure-terraform-workflows-on-aws-with-dynamic-provider-credentials/">AWS Partner Network (APN) Blog - Simplify and Secure Terraform Workflows on AWS with Dynamic Provider Credentials</a></li>
</ul>
<p>So instead of using IAM user credentials, this approach uses IAM roles and OpenID Connect to dynamically assign credentials to Terraform Cloud runs, which is a big win from a security perspective!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Automated Cloud Security Remediation</title>
      <link>https://www.wolfe.id.au/2023/02/19/automated-cloud-security-remediation/</link>
      <pubDate>Sun, 19 Feb 2023 11:00:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/02/19/automated-cloud-security-remediation/</guid>
<description>&lt;p&gt;Recently I have been looking into automated security remediation to understand its impacts, positive and negative. As I am a user of AWS, as well as other cloud services, I was particularly interested in how it helped maintain security in these environments. As with anything, it is good to understand what problem it is trying to solve and why it exists in the first place.&lt;/p&gt;
&lt;h2 id=&#34;so-firstly-what-does-automated-security-remediation-for-a-cloud-service-do&#34;&gt;So firstly what does automated security remediation for a cloud service do?&lt;/h2&gt;
&lt;p&gt;This is software which detects threats, more specifically misconfigurations of services, and automatically remediates problems.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I have been looking into automated security remediation to understand its impacts, positive and negative. As I am a user of AWS, as well other cloud services, I was particularly interested in how it helped maintain security in these environments. As with anything, it is good to understand what problem it is trying to solve and why it exists in the first place.</p>
<h2 id="so-firstly-what-does-automated-security-remediation-for-a-cloud-service-do">So firstly what does automated security remediation for a cloud service do?</h2>
<p>This is software which detects threats, more specifically misconfigurations of services, and automatically remediates problems.</p>
<h2 id="how-does-automated-security-remediation-work">How does automated security remediation work?</h2>
<p>Typically, security remediation tools take a feed of events from a service such as <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a> (an audit logging service) and check the configuration of the resources being modified. This is usually paired with regular scheduled scans to ensure nothing is missed in the case of dropped or missing events.</p>
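<p>As a rough illustration of the pattern, here is a minimal sketch in Python. All of the names and the configuration shape are hypothetical; a real tool would consume CloudTrail events and call the cloud provider&rsquo;s APIs to inspect the resource and apply the fix.</p>

```python
# Sketch of the detect-and-remediate loop: each check inspects a resource's
# configuration and, if it finds a misconfiguration, applies a fix.
# The config shape ({"bucket": ..., "acl": ...}) is hypothetical.

def check_public_bucket(config: dict) -> bool:
    # Detection: flag buckets that allow public access
    return config.get("acl") == "public-read"

def remediate_public_bucket(config: dict) -> dict:
    # Remediation: switch the bucket back to private
    return {**config, "acl": "private"}

def process_event(event: dict) -> dict:
    """Handle one configuration-change event from the audit log feed."""
    config = event["resource_config"]
    if check_public_bucket(config):
        config = remediate_public_bucket(config)
    return config

event = {
    "event_name": "PutBucketAcl",
    "resource_config": {"bucket": "example", "acl": "public-read"},
}
print(process_event(event))  # {'bucket': 'example', 'acl': 'private'}
```

<p>The scheduled-scan side of a real tool would simply run the same checks over every resource, rather than only those referenced by incoming events.</p>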
<h2 id="can-you-use-iam-to-avoid-security-misconfigurations-in-the-first-place">Can you use IAM to avoid security misconfigurations in the first place?</h2>
<p>Cloud services, such as AWS, have fairly complex identity services, such as <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a>, which provide a coarse-grained security policy language in the form of IAM policies. These policies are hard to fine-tune for the myriad security misconfigurations deployed by the people working in these cloud services.</p>
<p>Everyone has seen something like the following administrator policy allowing all permissions for administrators of an AWS environment. This is fine for a &ldquo;sandbox&rdquo; learning account, but is far too permissive for production accounts.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">Version</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;2012-10-17&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">Sid</span><span class="p">:</span><span class="w"> </span><span class="l">AdminAccess</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Allow</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Action</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;*&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Resource</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;*&#39;</span><span class="w">
</span></span></span></code></pre></div><p>That said, authoring IAM policies following the principle of least privilege, covering current requirements and new services coming online, and keeping up with emerging threats can be a significant cost in time and resources, and will likely at some point provide diminishing returns.</p>
<h2 id="can-you-use-aws-service-control-polices-scp-to-avoid-security-misconfigurations">Can you use AWS service control polices (SCP) to avoid security misconfigurations?</h2>
<p>In AWS there is another way to deny specific operations; this comes in the form of service control policies (SCPs). These policies are part of AWS Organizations and provide another layer of control above an account&rsquo;s IAM policies, allowing administrators to target specific operations and protect common resources. Again, these are also complex to configure and maintain, as they use the same coarse-grained policy language.</p>
<p>Below is an example SCP which prevents any VPC that doesn&rsquo;t already have internet access from getting it.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">Version</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;2012-10-17&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Deny</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">Action</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:AttachInternetGateway</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:CreateInternetGateway</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:CreateEgressOnlyInternetGateway</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:CreateVpcPeeringConnection</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:AcceptVpcPeeringConnection</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">globalaccelerator:Create*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">globalaccelerator:Update*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">Resource</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;*&#34;</span><span class="w">
</span></span></span></code></pre></div><p>Investment in SCPs is important for higher level controls, such as disabling the modification of security services like <a href="https://aws.amazon.com/guardduty/">Amazon GuardDuty</a>, <a href="https://aws.amazon.com/config/">AWS Config</a> and <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a>, as changes to these services may result in data loss. That said, SCPs are still dependent on IAM&rsquo;s coarse-grained policy language, which in turn is limited by each service&rsquo;s integration with IAM.</p>
<p>A note about SCPs: often you will see exclusions for roles which enable administrators, or continuous integration and delivery (CI/CD) systems, to bypass these policies. These should be reserved for exceptional situations, for example bootstrapping of services, or incidents. Use of these roles should therefore be gated via some sort of incident response process.</p>
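<p>Such an exclusion is usually expressed as a condition on the deny statement. The sketch below protects a couple of security services while exempting a break-glass role; the actions shown are examples and the role name is a placeholder, not from a real environment:</p>

```yaml
Version: '2012-10-17'
Statement:
- Effect: Deny
  Action:
  - guardduty:Delete*
  - cloudtrail:StopLogging
  - cloudtrail:DeleteTrail
  Resource: "*"
  Condition:
    StringNotLike:
      aws:PrincipalArn:
      - arn:aws:iam::*:role/BreakGlassAdmin  # hypothetical break-glass role
```

<p>Any principal other than the exempted role is denied these actions, which keeps the bypass path narrow and auditable.</p>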
<h3 id="so-why-does-automated-security-remediation-exist">So why does automated security remediation exist?</h3>
<p>Given the complexity of managing fine-grained security policies, organizations often implement a more reactive solution, typically in the form of automated security remediation services.</p>
<h3 id="what-are-some-of-disadvantages-of-these-automated-security-remediation-tools">What are some of the disadvantages of these automated security remediation tools?</h3>
<ul>
<li>False positives and false negatives: They may generate false positives, where legitimate actions are flagged as security threats, or false negatives, where actual security issues are missed.</li>
<li>Over-reliance on automation: Organizations may become over-reliant on tools, potentially leading to complacency or a lack of human oversight, which can create new risks and vulnerabilities.</li>
<li>Limited scope: They may not be able to detect or remediate all types of security issues or vulnerabilities, especially those that are highly complex or require a more nuanced approach.</li>
<li>Compliance and regulatory issues: Some compliance and regulatory frameworks may require manual security review or approval for certain types of security incidents, which can be challenging to reconcile with automated processes.</li>
<li>Cultural resistance: Some organizations may experience cultural resistance to automated remediation, as it may be perceived as a threat to job security or the role of security professionals.</li>
<li>Delayed or dropped trigger events: Automated remediation typically depends on triggers from audit events; these events can be delayed in large AWS environments, or dropped during a flood of activity.</li>
</ul>
<h2 id="what-are-some-of-the-positive-impacts-automated-remediation-tools">What are some of the positive impacts of automated remediation tools?</h2>
<ul>
<li>Increased efficiency: Can reduce the time and resources required to respond to security incidents, allowing security teams to focus on higher-value tasks.</li>
<li>Improved collaboration: Can help break down silos between different teams, as it often requires cross-functional collaboration between security, operations, and development teams.</li>
<li>Reduced burnout: By automating repetitive and time-consuming tasks, automated remediation can help reduce burnout among security people, who may otherwise be overwhelmed by the volume of security incidents they need to respond to manually.</li>
<li>Skills development: As organizations adopt these tools and processes, security teams may need to develop new skills and competencies in areas such as automation, scripting, and orchestration, which can have positive impacts on employee development and job satisfaction.</li>
<li>Cultural shift towards proactive security: They can help shift the culture of security within an organization from reactive to proactive, by enabling security teams to identify and remediate potential security risks before they become actual security incidents.</li>
</ul>
<h1 id="summary">Summary</h1>
<p>Overall, while automated security remediation can have some cultural and productivity impacts that need to be managed, it can also bring significant benefits to organizations by enabling more efficient, collaborative, and proactive security practices.</p>
<p>That said, automated security remediation really needs to be part of a three-pronged approach:</p>
<ol>
<li>Ensure people are working in cloud environments with only the privileges they require to do their work. There are of course exceptions to this, but they should be covered with a process which allows users to request more access when required.</li>
<li>SCPs should be used to protect security and governance services, and implement core restrictions within a collection of AWS accounts, depending on your business.</li>
<li>Automated security remediation should be used to cover all the edge cases; again, this should be used only where necessary, and with the understanding that it may take a period of time to fix an issue.</li>
</ol>
<p>One thing to note is we are working in an environment with a lot of smart and resourceful people, so organizations need to watch for situations where complex workarounds evolve to sidestep ineffective or cumbersome controls; otherwise these controls may impact morale, onboarding of staff, and the overall success of the business.</p>
<p>Security works best when it balances threats and usability!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>My Development Environment</title>
      <link>https://www.wolfe.id.au/2022/07/23/my-development-environment/</link>
      <pubDate>Sat, 23 Jul 2022 22:00:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2022/07/23/my-development-environment/</guid>
<description>&lt;p&gt;I was inspired by others to document the tools I use working as a software developer professionally, and hacking on side projects outside of work.&lt;/p&gt;
&lt;p&gt;One thing to note is in my day job I work on an Apple Mac, but my personal machine is a Linux laptop running &lt;a href=&#34;https://pop.system76.com/&#34;&gt;PopOS&lt;/a&gt;. I find using Linux as a desktop works well, as most software I use is web based or supported on Linux. I also use it for IoT development as pretty much all the tool chains I use support it.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I was inspired by others to document the tools I use working as a software developer professionally, and hacking on side projects outside of work.</p>
<p>One thing to note is in my day job I work on an Apple Mac, but my personal machine is a Linux laptop running <a href="https://pop.system76.com/">PopOS</a>. I find using Linux as a desktop works well, as most software I use is web based or supported on Linux. I also use it for IoT development as pretty much all the tool chains I use support it.</p>
<p>On the whole, over the years I have moved to a more minimal setup, primarily to keep things simple; <strong>less</strong> is easier to maintain, easier to share, and more likely to be adopted by others.</p>
<p>The stack I work with professionally is pretty varied, but can be summarized as:</p>
<ul>
<li><a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a>, I work primarily on this cloud platform in my day job</li>
<li><a href="https://aws.amazon.com/cloudformation/">Cloudformation</a>, native AWS infrastructure deployment</li>
<li><a href="https://go.dev">Go</a>, great language for building tools, APIs, and backend services</li>
<li><a href="https://www.python.org/">Python</a>, used for cloud orchestration, scripting and machine learning</li>
<li><a href="https://nodejs.org/en/">NodeJS</a> often using <a href="https://www.typescriptlang.org/">Typescript</a>, for frontend development</li>
<li><a href="https://git-scm.com/">Git</a>, used for all things source code</li>
</ul>
<h2 id="cli-tools">CLI Tools</h2>
<p>I primarily use zsh as my shell, sticking to a pretty minimal setup tools wise.</p>
<ul>
<li><a href="https://www.docker.com/">Docker</a> for containers, which I mainly use for testing.</li>
<li><a href="https://direnv.net/">direnv</a> which is used to change environment settings in projects.</li>
<li><a href="https://github.com/ggreer/the_silver_searcher">The Silver Searcher</a>, a faster search tool for the CLI; <code>ag</code> is my go-to for locating stuff in files when developing.</li>
<li><a href="https://github.com/cli/cli">GitHub CLI</a>, makes working with GitHub from the CLI a dream.</li>
<li><a href="https://awscli.amazonaws.com/v2/documentation/api/latest/index.html">AWS CLI</a> is used to write scripts and diagnose what is up with my cloud.</li>
<li><a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html">AWS SAM CLI</a> for deploying cloudformation in a semi sane way.</li>
<li><a href="https://github.com/nvm-sh/nvm">nvm</a>, nodejs changes a lot so I often need a couple of versions installed to support both new and old software.</li>
<li><a href="https://github.com/git/git/blob/master/contrib/completion/git-prompt.sh">Git Prompt</a> for a dash more information in my shell about the current tree&rsquo;s Git status.</li>
<li><a href="https://gnupg.org/">gnupg</a>, which I mostly use for signing of Git commits and software, and a bit of data encryption.</li>
</ul>
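<p>As an example of the direnv setup, a project&rsquo;s <code>.envrc</code> might just export a couple of AWS settings; the values here are placeholders, not from a real project:</p>

```shell
# .envrc - direnv loads this automatically on entering the project
# directory (after you approve it once with `direnv allow`).
export AWS_PROFILE=dev-sandbox    # placeholder profile name
export AWS_REGION=ap-southeast-2  # placeholder region
```

<p>When you leave the directory direnv unloads these again, which keeps per-project credentials and regions from leaking between shells.</p>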
<p>Most of my builds are done using the good old <code>Makefile</code>, so I always have <a href="https://www.gnu.org/software/make/">make</a> installed.</p>
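<p>The Makefiles I use are deliberately small; a sketch of the typical shape for a Go project (target names and tool choices are illustrative, not from any particular repository):</p>

```makefile
.PHONY: build test lint

# Build all packages in the module.
build:
	go build ./...

# Run the tests verbosely.
test:
	go test -v ./...

# Static analysis; assumes golangci-lint is installed locally.
lint:
	golangci-lint run
```

<p>Keeping the targets consistent across projects means <code>make build</code> and <code>make test</code> always work, whatever the stack underneath.</p>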
<h2 id="editor">Editor</h2>
<p>Currently I use <a href="https://code.visualstudio.com/">vscode</a> when developing; it is one of the first things I open each day. I was a vim user but moved to vscode as I prefer to use a more approachable editor, especially as I work with developers and &ldquo;non tech&rdquo; people who find it less daunting to learn.</p>
<p>I am trying to help <strong>everyone</strong> code, so using an approachable editor is <strong>really</strong> helpful!</p>
<p>To support the stack I use the following plugins:</p>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker">Code Spell Checker</a>, I really hate misspelling words in my code.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=EditorConfig.EditorConfig">EditorConfig for VS Code</a>, handy way to keep things consistently formatted across editors when working in a team.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens">GitLens — Git supercharged</a>, helps me figure out what changed and who changed it without leaving my editor.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=golang.Go">Go</a>, primary language I develop in.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=oderwat.indent-rainbow">indent-rainbow</a>, this addon keeps me sane when editing whitespace sensitive languages such as python and YAML!</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=ms-python.python">Python</a>, tons of stuff uses this language so I always end up using it.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=kddejong.vscode-cfn-lint">vscode-cfn-lint</a>, avoiding obvious errors and typos in my cloudformation templates saves a ton of time and frustration.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=wayou.vscode-todo-highlight">TODO Highlight</a>, I always try and add information and notes to my code, this helps highlight the important stuff.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml">YAML</a>, most of the tools I deploy with use it for configuration so I need a good linter.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=GitHub.github-vscode-theme">GitHub Theme</a>, I use the dimmed dark mode which is a really nice, comfortable coding theme.</li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Diving into AWS Billing Data</title>
      <link>https://www.wolfe.id.au/2022/07/05/diving-into-aws-billing-data/</link>
      <pubDate>Tue, 05 Jul 2022 22:47:59 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2022/07/05/diving-into-aws-billing-data/</guid>
<description>&lt;p&gt;Billing is an integral part of day to day &lt;a href=&#34;https://aws.amazon.com/&#34;&gt;AWS&lt;/a&gt; account operation, and to most it seems like a chore; however, there is a lot to be learnt interacting with &lt;a href=&#34;https://aws.amazon.com/aws-cost-management/aws-billing/&#34;&gt;AWS Billing&lt;/a&gt; data.&lt;/p&gt;
&lt;p&gt;So why would you ever want to dive into AWS Billing data in the first place?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It is pretty easy for both novices and experienced developers to rack up a sizable bill in AWS; part of the learning experience is figuring out how this happened.&lt;/li&gt;
&lt;li&gt;The billing data itself is available in &lt;a href=&#34;https://parquet.apache.org/&#34;&gt;parquet format&lt;/a&gt;, which is a great format to query and dig into with services such as Athena.&lt;/li&gt;
&lt;li&gt;This billing data is the only way of figuring out how much a specific AWS resource costs, which again is helpful for the learning experience.&lt;/li&gt;
&lt;li&gt;The Cost Explorer in AWS is great if you just want an overview, but having SQL access to the data is better for developers looking to dive a bit deeper.&lt;/li&gt;
&lt;li&gt;The billing service has a feature which records &lt;code&gt;created_by&lt;/code&gt; for resources; this is only available in the CUR data. If you have not already, you can enable it via &lt;a href=&#34;https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html&#34;&gt;Cost Allocation Tags&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These points pair nicely with the fact that a basic understanding of data wrangling in AWS is an invaluable skill to have in your repertoire.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Billing is an integral part of day to day <a href="https://aws.amazon.com/">AWS</a> account operation, and to most it seems like a chore; however, there is a lot to be learnt interacting with <a href="https://aws.amazon.com/aws-cost-management/aws-billing/">AWS Billing</a> data.</p>
<p>So why would you ever want to dive into AWS Billing data in the first place?</p>
<ol>
<li>It is pretty easy for both novices and experienced developers to rack up a sizable bill in AWS; part of the learning experience is figuring out how this happened.</li>
<li>The billing data itself is available in <a href="https://parquet.apache.org/">parquet format</a>, which is a great format to query and dig into with services such as Athena.</li>
<li>This billing data is the only way of figuring out how much a specific AWS resource costs, which again is helpful for the learning experience.</li>
<li>The Cost Explorer in AWS is great if you just want an overview, but having SQL access to the data is better for developers looking to dive a bit deeper.</li>
<li>The billing service has a feature which records <code>created_by</code> for resources; this is only available in the CUR data. If you have not already, you can enable it via <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html">Cost Allocation Tags</a>.</li>
</ol>
<p>These points pair nicely with the fact that a basic understanding of data wrangling in AWS is an invaluable skill to have in your repertoire.</p>
<h2 id="suggested-cur-solution">Suggested CUR Solution</h2>
<p>I have put together an automated solution which uses <a href="https://aws.amazon.com/cloudformation/">AWS CloudFormation</a> to create a <a href="https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html">Cost and Usage Report</a> (CUR) in your billing account, with a <a href="https://aws.amazon.com/glue/">Glue</a> table enabling querying of the latest data for each month in <a href="https://aws.amazon.com/athena/">Amazon Athena</a>. This project is on GitHub at <a href="https://github.com/wolfeidau/aws-billing-store">https://github.com/wolfeidau/aws-billing-store</a>; follow the <code>README.md</code> to get it set up.</p>
<p>In summary, the solution:</p>
<ol>
<li>Creates the CUR in the billing service and the bucket which receives the reports.</li>
<li>Configures a Glue Database and Table for use by Athena.</li>
<li>Deploys a Lambda function to manage the partitions using <a href="https://aws.amazon.com/eventbridge/">Amazon EventBridge</a> <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html">S3 events</a>.</li>
</ol>
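<p>For context on that last step, once a bucket is configured to send object events to EventBridge, the Lambda can subscribe with a rule pattern along these lines; this is only a sketch of the shape, and the bucket name is a placeholder for the report bucket the stack creates:</p>

```yaml
# Rough sketch of an EventBridge rule pattern matching new CUR objects.
EventPattern:
  source:
  - aws.s3
  detail-type:
  - Object Created
  detail:
    bucket:
      name:
      - my-cur-report-bucket  # placeholder for the report bucket
```

<p>Each matching event carries the object key, which is enough to work out the year/month partition the new report belongs to.</p>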
<p>Once deployed, all you need to do is wait until AWS pushes the first report to the solution; this can take up to 8 hours in my experience. Then you should be able to log into Athena and start querying the data.</p>
<figure>
    <img loading="lazy" src="/images/2022-07-02_cur_managment_diagram.png"/> <figcaption>
            CUR Solution Diagram
        </figcaption>
</figure>

<p>One thing to note is this is designed to be a starting point; I have released it under the <a href="https://www.apache.org/licenses/LICENSE-2.0">Apache 2.0 license</a> so you&rsquo;re welcome to pull this solution apart and integrate it into your environment.</p>
<h2 id="next-steps">Next Steps</h2>
<p>To test the solution you can start with a query which shows you <code>AmazonS3</code> costs grouped by bucket name and aggregated using <code>sum</code>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sql" data-lang="sql"><span class="line"><span class="cl"><span class="k">SELECT</span><span class="w"> </span><span class="n">line_item_resource_id</span><span class="w"> </span><span class="k">as</span><span class="w"> </span><span class="n">bucket_name</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="n">round</span><span class="p">(</span><span class="k">sum</span><span class="p">(</span><span class="n">line_item_blended_cost</span><span class="p">),</span><span class="w"> </span><span class="mi">4</span><span class="p">)</span><span class="w"> </span><span class="k">AS</span><span class="w"> </span><span class="n">cost</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">month</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="k">from</span><span class="w"> </span><span class="s2">&#34;raw_cur_data&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="k">WHERE</span><span class="w"> </span><span class="k">year</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">&#39;2022&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">and</span><span class="w"> </span><span class="k">month</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">&#39;7&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">AND</span><span class="w"> </span><span class="n">line_item_product_code</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">&#39;AmazonS3&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="k">GROUP</span><span class="w"> </span><span class="k">BY</span><span class="w"> </span><span class="n">line_item_resource_id</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">month</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="k">ORDER</span><span class="w"> </span><span class="k">BY</span><span class="w"> </span><span class="n">cost</span><span class="w"> </span><span class="k">DESC</span><span class="p">;</span><span class="w">
</span></span></span></code></pre></div><p>There are some great resources with other more advanced queries which provide insights from your CUR data, one of the best is <a href="https://wellarchitectedlabs.com/cost/300_labs/300_cur_queries/">Level 300: AWS CUR Query Library</a> from the <a href="https://wellarchitectedlabs.com/">The Well-Architected Labs website</a>.</p>
<p>The standout queries for me are:</p>
<ol>
<li><a href="https://wellarchitectedlabs.com/cost/300_labs/300_cur_queries/queries/security_identity__compliance/#amazon-guardduty">Amazon GuardDuty</a> - This query provides daily unblended cost and usage information about Amazon GuardDuty Usage. The usage amount and cost will be summed.</li>
<li><a href="https://wellarchitectedlabs.com/cost/300_labs/300_cur_queries/queries/storage/#amazon-s3">Amazon S3</a> - This query provides daily unblended cost and usage information for Amazon S3. The output will include detailed information about the resource id (bucket name), operation, and usage type. The usage amount and cost will be summed, and rows will be sorted by day (ascending), then cost (descending).</li>
</ol>
<h2 id="cost-allocation-tags">Cost Allocation Tags</h2>
<p>The <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html">Cost Allocation Tags</a> feature in billing allows you to record tag data which is included in the CUR. This is a great resource for attributing cost to a user, role or service, or alternatively a cloudformation stack.</p>
<p>I enable the following AWS tags for collection and inclusion in the CUR.</p>
<ul>
<li><code>aws:cloudformation:stack-name</code></li>
<li><code>aws:createdBy</code></li>
</ul>
<p>I also enable some of my own custom tags for collection and inclusion in the CUR.</p>
<ul>
<li><code>application</code></li>
<li><code>component</code></li>
<li><code>branch</code></li>
<li><code>environment</code></li>
</ul>
<p>You can see how these are added in the <a href="https://github.com/wolfeidau/aws-billing-store">https://github.com/wolfeidau/aws-billing-store</a> project <code>Makefile</code> when the stacks are launched.</p>
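<p>Once activated, each tag surfaces in the CUR as its own column, with user tags named <code>resource_tags_user_&lt;tag&gt;</code>. A sketch of attributing cost by the custom <code>application</code> tag, assuming the <code>raw_cur_data</code> table from the solution above:</p>

```sql
-- Monthly cost grouped by the custom "application" tag.
-- Table and column names assume the aws-billing-store setup.
SELECT resource_tags_user_application AS application,
	round(sum(line_item_blended_cost), 4) AS cost,
	month
FROM "raw_cur_data"
WHERE year = '2022'
	AND month = '7'
GROUP BY resource_tags_user_application, month
ORDER BY cost DESC;
```

<p>Rows with an empty <code>application</code> value are a handy signal for resources that slipped through untagged.</p>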
]]></content:encoded>
    </item>
    
    <item>
      <title>GitHub Actions supply chain attacks</title>
      <link>https://www.wolfe.id.au/2021/04/26/github-actions-supply-chain-attacks/</link>
      <pubDate>Mon, 26 Apr 2021 19:30:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2021/04/26/github-actions-supply-chain-attacks/</guid>
<description>&lt;p&gt;There has been a lot of press about supply chain attacks recently; these types of attacks are nothing new, and understanding them is really important for developers using services such as &lt;a href=&#34;https://github.com/features/actions&#34;&gt;GitHub Actions&lt;/a&gt;, given continuous integration (CI) tools are a critical part of the supply chain used in software projects.&lt;/p&gt;
&lt;p&gt;A supply chain attack targets less secure parts of the development process, this could be the tools and services you depend on, or the docker containers you host your software in. These attacks come in different forms but some examples are:&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>There has been a lot of press about supply chain attacks recently; these types of attacks are nothing new, and understanding them is really important for developers using services such as <a href="https://github.com/features/actions">GitHub Actions</a>, given continuous integration (CI) tools are a critical part of the supply chain used in software projects.</p>
<p>A supply chain attack targets less secure parts of the development process, this could be the tools and services you depend on, or the docker containers you host your software in. These attacks come in different forms but some examples are:</p>
<ul>
<li>Extract credentials from your CI services like the <a href="https://about.codecov.io/security-update/">Codecov security incident</a>.</li>
<li>Seed malware for an attack down stream on your customers like the <a href="https://krebsonsecurity.com/tag/solarwinds-breach/">solarwinds breach</a>.</li>
</ul>
<p>In this post I am going to dive into an example of an attack that affected a lot of projects using GitHub Actions recently, but this could be applied more broadly to any CI tool or service relying on third party services or code.</p>
<h1 id="why-is-the-codecov-security-incident-interesting">Why is the Codecov security incident interesting?</h1>
<p>The <a href="https://about.codecov.io/security-update/">Codecov security incident</a> illustrates a novel attack on a popular developer tool, which in turn exposed a number of CI integrations including the widely used GitHub Actions.</p>
<p>The initial attack happened in January when the Codecov Bash uploader script was modified in a cloud storage service.</p>
<p>This script provides a language-agnostic alternative for sending your coverage reports to Codecov and is used by at least 5 of the Codecov continuous integration (CI) integrations.</p>
<p>The Codecov GitHub Action was one of them: it downloads and executes the script each time it is run and, critically, didn&rsquo;t verify the checksum of this file against the release, so the modified script continued working.</p>
<p>The modified script extracted all environment variables in that workflow and uploaded them to a website operated by the attacker.</p>
<p>These variables are often used to pass credentials into the workflow for services such as <a href="https://hub.docker.com">Docker Hub</a>, <a href="https://www.npmjs.com/">NPM</a>, cloud storage buckets and other software distribution services.</p>
<p>The extraction of these credentials while this exploit was active could lead to the modifications of builds and other artifacts resulting in further exploits and extending the footprint of this attack.</p>
<p>Most concerning is this exploit was effectively sitting in the supply chain of <a href="https://github.com/search?l=&amp;q=codecov-action&#43;language%3AYAML&amp;type=code">1000s of open source</a> and proprietary workflows extracting data undetected for approximately 4 months.</p>
<p><strong>Note:</strong> It is worth reading the <a href="https://about.codecov.io/security-update/">security update</a> posted by Codecov as it highlights some of the steps you need to take if you are affected by this sort of attack.</p>
<h1 id="what-can-you-do-to-mitigate-these-sorts-of-attacks">What can you do to mitigate these sorts of attacks?</h1>
<p>To ensure your GitHub Actions are secure I recommend:</p>
<ul>
<li>Read the <a href="https://docs.github.com/en/actions/learn-github-actions/security-hardening-for-github-actions">GitHub actions hardening</a> documentation.</li>
<li>Limit exposure of secrets to only the projects and repositories which need these values by implementing <a href="https://github.blog/2021-04-13-implementing-least-privilege-for-secrets-in-github-actions/">least privilege for secrets in GitHub Actions</a>.</li>
<li>Read about <a href="https://securitylab.github.com/research/github-actions-untrusted-input/">Keeping your GitHub Actions and workflows secure: Untrusted input</a>.</li>
<li>Regularly rotate the credentials used in your GitHub Actions; this helps mitigate historical backups or logs being leaked by a service.</li>
<li>If an action is supplied and supported by a vendor, ensure emails or advisories are sent to a shared email box, and not attached to a personal email. This will enable monitoring by more than one person, and enable you to go on holidays.</li>
</ul>
<p>For actions which have access to important secrets, like those used to upload your software libraries and releases, or deploying your services, you may want to fork them and add security scanning. This is more important if there are no vendor supported alternatives, or it is a less widely supported technology.</p>
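<p>The heart of the Codecov failure was executing a downloaded script without verifying it. Below is a minimal sketch of the kind of checksum gate that would have caught the tampering; the file name and demo script body are placeholders, not Codecov&rsquo;s actual uploader:</p>

```shell
#!/usr/bin/env bash
# Sketch: verify a third-party script against a pinned checksum before
# executing it, rather than piping a fresh download straight to bash.
set -euo pipefail

verify_and_run() {
  local script="$1" expected="$2" actual
  actual="$(sha256sum "$script" | awk '{print $1}')"
  if [ "$actual" != "$expected" ]; then
    echo "checksum mismatch for $script, refusing to run" >&2
    return 1
  fi
  bash "$script"
}

# Demo: a locally created script stands in for the downloaded uploader.
echo 'echo "coverage uploaded"' > uploader.sh
PINNED="$(sha256sum uploader.sh | awk '{print $1}')"
verify_and_run uploader.sh "$PINNED"   # prints "coverage uploaded"
```

<p>Committing the expected checksum to your repository means a modified upload of the script fails loudly in CI instead of silently running.</p>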
<h1 id="reviewing-your-actions">Reviewing your actions</h1>
<p>Given we all still want the benefits of services such as GitHub Actions while also managing the risks, we need to maintain a balance between getting the most out of the service and limiting possible exploits.</p>
<p>The first step is to review the GitHub Actions you&rsquo;re using in your workflows, just like you would for open source libraries:</p>
<ul>
<li>How active are these projects? Are PRs merged / reviewed in a timely manner?</li>
<li>Is the author known for building good quality software and build automation?</li>
<li>Are these actions supported by a company or service?</li>
<li>Does the project have a security policy?</li>
</ul>
<p>When it comes to open source GitHub Actions you need to be aware that most open source licenses limit liability for the author; this means you as a consumer need to actively manage some of the risks of running this software. Performing maintenance, bug fixes and contributing to the upkeep of the software is key to ensuring the risk of exploits is minimized.</p>
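<p>One concrete control from the hardening guidance mentioned earlier is pinning third-party actions to a full commit SHA rather than a mutable tag, so a re-tagged release cannot silently change the code your workflow runs. A sketch of a workflow step; the SHA and version comment below are placeholders, not a real release:</p>

```yaml
steps:
  # Pin to the full commit SHA you reviewed; keep the human-readable
  # tag in a comment for future reviewers. SHA is a placeholder.
  - uses: codecov/codecov-action@0123456789abcdef0123456789abcdef01234567 # vX.Y.Z
```

<p>The trade-off is that pinned SHAs do not pick up fixes automatically, so pair this with a dependency update tool or a regular review cadence.</p>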
<p>Lastly run some internal training or workshops around supply chain attacks in your company, this could involve running a scenario like the Codecov incident as a <a href="https://blog.rsisecurity.com/how-to-perform-a-security-incident-response-tabletop-exercise/">table top exercise</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why isn&#39;t my s3 bucket secure?</title>
      <link>https://www.wolfe.id.au/2020/10/08/why-isnt-my-s3-bucket-secure/</link>
      <pubDate>Thu, 08 Oct 2020 19:30:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2020/10/08/why-isnt-my-s3-bucket-secure/</guid>
      <description>&lt;p&gt;We have all read horror stories of &lt;a href=&#34;https://aws.amazon.com/s3/&#34;&gt;Amazon Simple Storage Service&lt;/a&gt; (S3) buckets being “hacked” in the popular media, and we have seen lots of work by &lt;a href=&#34;https://aws.amazon.com&#34;&gt;Amazon Web Services&lt;/a&gt; (AWS) to tighten up controls and messaging around best practices. So how do the amazon tools help you avoid some of the pitfalls with S3?&lt;/p&gt;
&lt;p&gt;Case in point, the &lt;a href=&#34;https://aws.amazon.com/cli/&#34;&gt;AWS CLI&lt;/a&gt; which a large number of engineers and developers rely on every day, the following command will create a bucket.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have all read horror stories of <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service</a> (S3) buckets being “hacked” in the popular media, and we have seen lots of work by <a href="https://aws.amazon.com">Amazon Web Services</a> (AWS) to tighten up controls and messaging around best practices. So how do the amazon tools help you avoid some of the pitfalls with S3?</p>
<p>Case in point, the <a href="https://aws.amazon.com/cli/">AWS CLI</a> which a large number of engineers and developers rely on every day, the following command will create a bucket.</p>
<pre tabindex="0"><code>$ aws s3 mb s3://my-important-data
</code></pre><p>One would assume this commonly referenced example, which is used in a lot of the resources provided by AWS, would create a bucket following best practices. But alas, no&hellip;</p>
<p>The configuration which is considered <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html">best practice for security of an S3 bucket</a>, but missing here, is:</p>
<ul>
<li>Enable Default Encryption</li>
<li>Block Public Access configuration</li>
<li>Enforce encryption of data in transit (HTTPS)</li>
</ul>
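<p>For comparison, here is roughly what it takes to apply those three settings by hand after the bucket above is created. This is a sketch only; review the policy before using it anywhere real:</p>

```shell
# Enable default encryption (SSE-S3).
aws s3api put-bucket-encryption \
  --bucket my-important-data \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Block all forms of public access.
aws s3api put-public-access-block \
  --bucket my-important-data \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Enforce encryption in transit by denying non-HTTPS requests.
aws s3api put-bucket-policy --bucket my-important-data --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::my-important-data",
                 "arn:aws:s3:::my-important-data/*"],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}}
  }]
}'
```

<p>Three extra calls for what arguably should be the default; this is exactly the friction new developers hit.</p>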
<h2 id="why-is-this-a-problem">Why is this a Problem?</h2>
<p>I personally have a lot of experience teaching developers how to get started in AWS, and time and time again it is lax defaults which let this cohort down. Of course this happens a lot while they are just getting started.</p>
<p>Sure, there are <a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html">guard rails</a> implemented using services such as AWS Security Hub, pointing out issues left, right and center, but these typically identify problems which wouldn&rsquo;t be there in the first place if the tools provided better defaults.</p>
<p>Sure, there is more advanced configuration available, but <strong>encryption</strong> and blocking <strong>public access</strong> by default seem like a good start, and would reduce the noise from these tools.</p>
<p>The key point here is that it should be hard for new developers to avoid these recommended and recognised best practices when creating an S3 bucket.</p>
<p>In addition to this, keeping up with the ever-growing list of “best practice” configuration is really impacting the velocity and morale of both seasoned developers and those new to the platform. Providing some tools which help developers keep up, and provide some uplift when upgrading existing infrastructure, would be a boon.</p>
<p>This is especially the case for developers building solutions using <em>serverless</em>, as they tend to use more of the AWS native services, and in turn trigger more of these “guard rails”.</p>
<p>Lastly, there are a lot of developers out there who just don&rsquo;t have time to &ldquo;harden&rdquo; their environments, and teams who have no choice but to ignore &ldquo;best practices&rdquo;; both would benefit a lot from some uplift in this area.</p>
<h2 id="what-about-cloudformation">What about CloudFormation?</h2>
<p>To further demonstrate this issue, here is S3 bucket creation in <a href="https://aws.amazon.com/cloudformation/">CloudFormation</a>, which is the baseline orchestration tool for building resources, provided free of charge by AWS. This is a very basic example, as seen in a lot of projects on GitHub and in the <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html#aws-properties-s3-bucket--examples">AWS CloudFormation documentation</a>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MyDataBucket</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::S3::Bucket</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">BucketName</span><span class="p">:</span><span class="w"> </span><span class="l">MyDataBucket</span><span class="w">
</span></span></span></code></pre></div><p>Now you could argue that CloudFormation is doing exactly what you tell it to do; it is just a primitive layer which translates YAML or JSON into API calls to AWS. But I think this again is really letting developers down.</p>
<p>Again this is missing default encryption and public access safeguards. In addition to this, a lot of quality tools also recommend the following:</p>
<ul>
<li>Explicit deny of Delete* operations, good practice for systems of record</li>
<li>Enable Versioning, optional but good practice for systems of record</li>
<li>Enable object access logging, which is omitted here to keep the example brief</li>
</ul>
<p>So here is a basic example with most of these options enabled; as you can see, it is quite a lot to fill in for yourself.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MyDataBucket</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::S3::Bucket</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">DeletionPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Retain</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">UpdateReplacePolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Retain</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">BucketName</span><span class="p">:</span><span class="w"> </span>!<span class="l">Ref BucketName</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">BucketEncryption</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">ServerSideEncryptionConfiguration</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span>- <span class="nt">ServerSideEncryptionByDefault</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span><span class="nt">SSEAlgorithm</span><span class="p">:</span><span class="w"> </span><span class="l">AES256</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">VersioningConfiguration</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Status</span><span class="p">:</span><span class="w"> </span><span class="l">Enabled</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">PublicAccessBlockConfiguration</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">BlockPublicAcls</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">BlockPublicPolicy</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">IgnorePublicAcls</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">RestrictPublicBuckets</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MyDataBucketPolicy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::S3::BucketPolicy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">Bucket</span><span class="p">:</span><span class="w"> </span>!<span class="l">Ref MyDataBucket</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">PolicyDocument</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Id</span><span class="p">:</span><span class="w"> </span><span class="l">AccessLogBucketPolicy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Version</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;2012-10-17&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span>- <span class="nt">Sid</span><span class="p">:</span><span class="w"> </span><span class="l">AllowSSLRequestsOnly</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Action</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span>- <span class="l">s3:*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Deny</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Resource</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span>- !<span class="l">Sub &#34;arn:aws:s3:::${MyDataBucket}/*&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span>- !<span class="l">Sub &#34;arn:aws:s3:::${MyDataBucket}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Condition</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span><span class="nt">Bool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                    </span><span class="nt">&#34;aws:SecureTransport&#34;: </span><span class="s2">&#34;false&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Principal</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;*&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span>- <span class="nt">Sid</span><span class="p">:</span><span class="w"> </span><span class="l">Restrict Delete* Actions</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Action</span><span class="p">:</span><span class="w"> </span><span class="l">s3:Delete*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Deny</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Principal</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;*&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Resource</span><span class="p">:</span><span class="w"> </span>!<span class="l">Sub &#34;arn:aws:s3:::${MyDataBucket}/*&#34;</span><span class="w">
</span></span></span></code></pre></div><p>To do this with the AWS CLI in one command would require quite a few flags and options; rather than including that here, I will leave that exercise up to the reader.</p>
<p>Now some may say this is a great opportunity for consulting companies to endlessly uplift customer infrastructure. But this again raises the questions:</p>
<ol>
<li>Why is this the case for customers using the recommended tools?</li>
<li>What about developers getting started on their first application?</li>
<li>Wouldn&rsquo;t it be better to have these consultants building something new, rather than crafting reams of YAML?</li>
</ol>
<h2 id="why-provide-resources-which-are-secure-by-default">Why Provide Resources which are Secure by Default?</h2>
<p>I have used S3 buckets as a very common example, but there is an ever-growing list of services in AWS that I think would benefit from better default configuration.</p>
<p>Just to summarise some of the points I have made above:</p>
<ol>
<li>It would make it harder for those new to the cloud to do the wrong thing when following examples.</li>
<li>The cost of building and maintaining infrastructure would be reduced over time, as safer defaults would remove the need for pages of code to deploy “secure” S3 buckets.</li>
<li>For new and busy developers, things would be mostly right from the beginning, and that baseline could be updated over time, even if just for new applications, leaving them more time to do the actual work they should be doing.</li>
</ol>
<p>Anyone who is old enough to remember <a href="https://en.wikipedia.org/wiki/Solaris_%28operating_system%29">Sun Solaris</a> will recall the “secure by default” effort launched with Solaris 10 around 2005, which also came with “self healing” (a stretch goal for AWS?). So insecure defaults are not a new problem, and they have been addressed before!</p>
<h2 id="follow-up-qa">Follow Up Q&amp;A</h2>
<p>I have added some of the questions I received while this article was being reviewed, along with the answers I put together.</p>
<h4 id="will-cdk-help-with-this-problem-of-defaults">Will CDK help with this problem of defaults?</h4>
<p>As it stands now, I don&rsquo;t believe the default S3 bucket construct has any special default settings. There is certainly room for someone to make &ldquo;secure&rdquo; versions of the constructs, but developers would need to search for them, and that kind of misses the point of helping the wider AWS user community.</p>
<h4 id="why-dont-you-just-write-your-own-cli-to-create-buckets">Why don&rsquo;t you just write your own CLI to create buckets?</h4>
<p>This is a good suggestion; however, I already have my fair share of side projects, and if I were to do this it would need to be championed by an organisation and team that got value from the effort. But again, needing to tell every new engineer to ignore the default AWS CLI because it isn&rsquo;t &ldquo;secure&rdquo; seems less than ideal; I really want everyone to be &ldquo;secure&rdquo;.</p>
<h4 id="how-did-you-come-up-with-this-topic">How did you come up with this topic?</h4>
<p>Well, I am currently working through &ldquo;retrofitting&rdquo; the latest best practices on a bunch of AWS serverless stacks which I helped build a year or so ago. This is when I asked myself: why am I searching for, and then helping to document, what should be &ldquo;baseline&rdquo; configuration for S3 buckets?!</p>
<h4 id="wont-this-make-the-tools-more-complicated-adding-all-these-best-practices">Won&rsquo;t this make the tools more complicated adding all these best practices?</h4>
<p>I think any uplift at all would be a bonus at the moment. I don&rsquo;t think it would be wise to take on every best practice out there, but surely the 80/20 rule applies here. Anything to reduce the amount of retrofitting we need to do would be a good thing in my view.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>AWS Events reading list</title>
      <link>https://www.wolfe.id.au/2020/03/31/aws-events-reading-list/</link>
      <pubDate>Tue, 31 Mar 2020 04:30:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2020/03/31/aws-events-reading-list/</guid>
      <description>&lt;p&gt;For some time now I have been working on internal, and some product-related, services which use AWS events; some of this has been paired with &lt;a href=&#34;https://docs.aws.amazon.com/appsync/latest/devguide/real-time-data.html&#34;&gt;AppSync subscriptions&lt;/a&gt;, &lt;a href=&#34;https://slack.com/&#34;&gt;Slack&lt;/a&gt; and &lt;a href=&#34;https://aws.amazon.com/sns/&#34;&gt;AWS SNS&lt;/a&gt;. To help everyone come up to speed with events, and async messaging in general, in a world of REST and synchronous APIs, I have been compiling a list of links which I thought I would share in a post.&lt;/p&gt;
&lt;p&gt;To start out it is helpful to have an overview; this post and the associated talk &lt;a href=&#34;https://www.youtube.com/watch?v=h46IquqjF3E&#34;&gt;Moving to event-driven architectures (SVS308-R1)&lt;/a&gt; are a good place to start.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For some time now I have been working on internal, and some product-related, services which use AWS events; some of this has been paired with <a href="https://docs.aws.amazon.com/appsync/latest/devguide/real-time-data.html">AppSync subscriptions</a>, <a href="https://slack.com/">Slack</a> and <a href="https://aws.amazon.com/sns/">AWS SNS</a>. To help everyone come up to speed with events, and async messaging in general, in a world of REST and synchronous APIs, I have been compiling a list of links which I thought I would share in a post.</p>
<p>To start out it is helpful to have an overview; this post and the associated talk <a href="https://www.youtube.com/watch?v=h46IquqjF3E">Moving to event-driven architectures (SVS308-R1)</a> are a good place to start.</p>
<ul>
<li><a href="https://www.tbray.org/ongoing/When/202x/2020/03/07/Eventing-Facets">Tim Bray - Eventing Facets</a></li>
</ul>
<p>Then, for those that want to see some code, take a look at the analytics component in this project developed by the serverless team at AWS; there are tons of great infra examples in it. Although the code is a bit complex, there is a lot to garner even if you&rsquo;re not a Java developer.</p>
<ul>
<li><a href="https://github.com/awslabs/realworld-serverless-application">awslabs/realworld-serverless-application</a></li>
</ul>
<p>This project uses a great reusable component which takes an <a href="https://aws.amazon.com/dynamodb">AWS DynamoDB</a> stream and publishes it onto <a href="https://aws.amazon.com/eventbridge">AWS EventBridge</a>. Again, if Java isn&rsquo;t your language of choice, there are still some gems in here, such as the logic used to <a href="https://github.com/awslabs/aws-dynamodb-stream-eventbridge-fanout/blob/master/src/main/java/com/amazonaws/dynamodb/stream/fanout/publisher/EventBridgeRetryClient.java">retry submission of events to EventBridge</a>.</p>
<ul>
<li><a href="https://github.com/awslabs/aws-dynamodb-stream-eventbridge-fanout">awslabs/aws-dynamodb-stream-eventbridge-fanout</a></li>
</ul>
<p>From the AWS samples comes this project, which is worth digging into; it has a bunch of simple examples with diagrams, which are always a plus.</p>
<ul>
<li><a href="https://github.com/aws-samples/aws-serverless-ecommerce-platform/tree/master/orders">aws-samples/aws-serverless-ecommerce-platform</a></li>
</ul>
<p>To enable some experimentation and development this CLI tool is pretty handy.</p>
<ul>
<li><a href="https://github.com/spezam/eventbridge-cli">spezam/eventbridge-cli</a></li>
</ul>
<p>As I go I will add more links, and I am happy to take suggestions.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>How do I Structure my Go Project?</title>
      <link>https://www.wolfe.id.au/2020/03/10/how-do-i-structure-my-go-project/</link>
      <pubDate>Tue, 10 Mar 2020 04:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2020/03/10/how-do-i-structure-my-go-project/</guid>
      <description>&lt;p&gt;Assuming you read my &lt;a href=&#34;https://www.wolfe.id.au/2020/03/10/starting-a-go-project/&#34;&gt;Starting a Go Project&lt;/a&gt; post you should have the starting point for a minimal Go web service. For your first project it is easier to keep all your code in one folder, in the base of your project, but at some point you will want to restructure things; this is done for a few reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Having everything in one folder results in a lot of inter dependencies in the code.&lt;/li&gt;
&lt;li&gt;Reuse outside the project can be difficult as the code is only designed to be used in one package.&lt;/li&gt;
&lt;li&gt;It is impossible to have more than one binary, as you can have only one &lt;code&gt;main&lt;/code&gt; method.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This post will provide an overview of the structure I follow in my Go projects when building web services.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Assuming you read my <a href="https://www.wolfe.id.au/2020/03/10/starting-a-go-project/">Starting a Go Project</a> post you should have the starting point for a minimal Go web service. For your first project it is easier to keep all your code in one folder, in the base of your project, but at some point you will want to restructure things; this is done for a few reasons:</p>
<ul>
<li>Having everything in one folder results in a lot of inter dependencies in the code.</li>
<li>Reuse outside the project can be difficult as the code is only designed to be used in one package.</li>
<li>It is impossible to have more than one binary, as you can have only one <code>main</code> method.</li>
</ul>
<p>This post will provide an overview of the structure I follow in my Go projects when building web services.</p>
<p><strong>Note:</strong> If you&rsquo;re just building a library to use in your services, or to share with others, it is OK to put everything in the base folder of your project; an example of this is my <a href="https://github.com/wolfeidau/dynastore">dynastore</a> library.</p>
<h2 id="cmd">/cmd</h2>
<p>This folder contains the main application entry point files for the project, with the directory name matching the name for the binary. So, for example, <code>cmd/simple-service</code> means that the binary we publish will be named <code>simple-service</code>.</p>
<h2 id="internal">/internal</h2>
<p>This folder holds the private library code used in your service; it is specific to the function of the service and not shared with other services. One thing to note is that this privacy is enforced by the compiler itself; see the <a href="https://golang.org/doc/go1.4#%23internalpackages">Go 1.4 release notes</a> for more details.</p>
<h2 id="pkg">/pkg</h2>
<p>This folder contains code which is OK for other services to consume; this may include API clients, or utility functions which are handy for other projects but don&rsquo;t justify a project of their own. Personally, I prefer to use this over <code>internal</code>, mainly as I like to keep things open for reuse in most of my projects.</p>
<h2 id="project-structure">Project Structure</h2>
<p>As you build out your project there are some very important goals you should consider when it comes to how you structure your packages:</p>
<ul>
<li>Keep things consistent</li>
<li>Keep things as simple as possible, but no simpler</li>
<li>Loosely couple sections of the service or application</li>
<li>Aim to ensure it is easy to navigate your way around</li>
</ul>
<p>Overall, when getting started you should experiment a bit: try a few different ideas when building out your first project, and get some feedback based on the above goals.</p>
<p>The number one objective is that you build easy-to-maintain, consistent and reliable software.</p>
<h2 id="example">Example</h2>
<p>I recommend taking a look at <a href="https://github.com/wolfeidau/exitus">exitus</a> to see how I structure my projects; most of the code is under the <code>pkg</code> folder, with each sub-folder containing one or more files. From the top level it is pretty clear what each package relates to, and although it is lean on tests, it has a few examples.</p>
<pre tabindex="0"><code>$ tree exitus/
 exitus/
├── cmd
│   ├── authtest
│   │   └── main.go
│   ├── backend
│   │   └── main.go
│   └── client
│       └── main.go
├── dev
│   ├── add_migration.sh
│   └── docker-compose.yml
├── Dockerfile
├── go.mod
├── go.sum
├── migrations
│   ├── 20190721131113_extensions.down.sql
│   ├── 20190721131113_extensions.up.sql
│   ├── 20190723044115_customer_projects.down.sql
│   ├── 20190723044115_customer_projects.up.sql
│   ├── 20190726175158_issues.down.sql
│   ├── 20190726175158_issues.up.sql
│   ├── 20190726201649_comments.down.sql
│   ├── 20190726201649_comments.up.sql
│   ├── bindata.go
│   ├── gen.go
│   ├── migrations_test.go
│   └── README.md
├── pkg
│   ├── api
│   │   ├── exitus.gen.go
│   │   ├── exitus.yml
│   │   └── gen.go
│   ├── auth
│   │   ├── scopes.go
│   │   └── user.go
│   ├── conf
│   │   ├── conf.go
│   │   └── conf_test.go
│   ├── db
│   │   ├── db.go
│   │   ├── dbtesting.go
│   │   ├── migrate.go
│   │   ├── sqlhooks.go
│   │   └── transactions.go
│   ├── env
│   │   └── env.go
│   ├── healthz
│   │   ├── healthz.go
│   │   └── healthz_test.go
│   ├── jwt
│   │   └── jwt.go
│   ├── metrics
│   │   └── metrics.go
│   ├── middleware
│   │   ├── jwt.go
│   │   └── middleware.go
│   ├── oidc
│   │   └── client.go
│   ├── server
│   │   ├── reflect.go
│   │   └── server.go
│   └── store
│       ├── comments.go
│       ├── comments_test.go
│       ├── customers.go
│       ├── customers_test.go
│       ├── issues.go
│       ├── issues_test.go
│       ├── migrate_test.go
│       ├── projects.go
│       ├── projects_test.go
│       └── store.go
└── README.md
</code></pre><p>The aim here is to illustrate how you grow your project from a couple of files to a larger web service. I encourage you to trawl through GitHub projects and dig into how other developers have structured theirs, and most of all, try it out yourself!</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://www.youtube.com/watch?v=PTE4VJIdHPg">GopherCon EU 2018: Peter Bourgon - Best Practices for Industrial Programming</a></li>
<li><a href="https://github.com/golang-standards/project-layout">Standard Go Project Layout</a></li>
<li><a href="https://www.youtube.com/watch?v=oL6JBUk6tj0">GopherCon 2018: Kat Zien - How Do You Structure Your Go Apps</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Starting a Go Project</title>
      <link>https://www.wolfe.id.au/2020/03/10/starting-a-go-project/</link>
      <pubDate>Tue, 10 Mar 2020 04:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2020/03/10/starting-a-go-project/</guid>
      <description>&lt;p&gt;Given the changes with &lt;a href=&#34;https://blog.golang.org/using-go-modules&#34;&gt;Go Modules&lt;/a&gt; I wanted to document a brief getting-started guide for Go projects; this will focus on building a minimal web service.&lt;/p&gt;
&lt;p&gt;Before you start you will need to install Go. I recommend using &lt;a href=&#34;https://brew.sh/&#34;&gt;homebrew&lt;/a&gt;, or for Ubuntu users &lt;a href=&#34;https://launchpad.net/~longsleep/&amp;#43;archive/ubuntu/golang-backports&#34;&gt;Golang Backports&lt;/a&gt;, or as a last resort grab it from the &lt;a href=&#34;https://golang.org/dl/&#34;&gt;Go Downloads&lt;/a&gt; page.&lt;/p&gt;
&lt;p&gt;So this is what it looks like for OSX.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;brew install go
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Or for &lt;a href=&#34;https://ubuntu.com/&#34;&gt;Ubuntu&lt;/a&gt; we add the PPA, then install Go 1.14 and update our path.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Given the changes with <a href="https://blog.golang.org/using-go-modules">Go Modules</a> I wanted to document a brief getting-started guide for Go projects; this will focus on building a minimal web service.</p>
<p>Before you start you will need to install Go. I recommend using <a href="https://brew.sh/">homebrew</a>, or for Ubuntu users <a href="https://launchpad.net/~longsleep/&#43;archive/ubuntu/golang-backports">Golang Backports</a>, or as a last resort grab it from the <a href="https://golang.org/dl/">Go Downloads</a> page.</p>
<p>So this is what it looks like for OSX.</p>
<pre tabindex="0"><code>brew install go
</code></pre><p>Or for <a href="https://ubuntu.com/">Ubuntu</a> we add the PPA, then install Go 1.14 and update our path.</p>
<pre tabindex="0"><code>sudo add-apt-repository ppa:longsleep/golang-backports
sudo apt-get update
sudo apt-get install golang-1.14
echo &#39;export PATH=$PATH:/usr/lib/go-1.14/bin&#39; &gt;&gt; ~/.bashrc
source ~/.bashrc
</code></pre><p>Now we should be able to run.</p>
<pre tabindex="0"><code>go version
</code></pre><p>Now navigate to where you build your projects (for me this is <code>~/Code/goprojects</code>) and make a folder. One thing to note here is that this <code>goprojects</code> folder is not in my <code>$GOPATH</code>, as we are using modules.</p>
<pre tabindex="0"><code>cd ~/Code/goprojects
mkdir simple-go-service
cd simple-go-service
</code></pre><p>Before we start adding code, let&rsquo;s initialise our project. You should replace <em>USERNAME</em> with your GitHub username; for me it is <a href="https://github.com/wolfeidau">wolfeidau</a>.</p>
<pre tabindex="0"><code>go mod init github.com/USERNAME/simple-go-service
</code></pre><p>Now I follow a pattern of storing the entry point in a <code>cmd</code> folder; this is done so I can easily customise the name of the binary, as <code>go</code> uses the parent folder name for executables.</p>
<pre tabindex="0"><code>mkdir -p cmd/simple-service
touch cmd/simple-service/main.go
</code></pre><p>Now add some code to the <code>main.go</code> you created with the previous command; this will listen on port <code>:8000</code> for web requests.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="kn">package</span><span class="w"> </span><span class="nx">main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;io&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;log&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;net/http&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">main</span><span class="p">()</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="c1">// Hello world, the web server</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">helloHandler</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="kd">func</span><span class="p">(</span><span class="nx">w</span><span class="w"> </span><span class="nx">http</span><span class="p">.</span><span class="nx">ResponseWriter</span><span class="p">,</span><span class="w"> </span><span class="nx">req</span><span class="w"> </span><span class="o">*</span><span class="nx">http</span><span class="p">.</span><span class="nx">Request</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="nx">io</span><span class="p">.</span><span class="nf">WriteString</span><span class="p">(</span><span class="nx">w</span><span class="p">,</span><span class="w"> </span><span class="s">&#34;Hello, world!\n&#34;</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">http</span><span class="p">.</span><span class="nf">HandleFunc</span><span class="p">(</span><span class="s">&#34;/hello&#34;</span><span class="p">,</span><span class="w"> </span><span class="nx">helloHandler</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">log</span><span class="p">.</span><span class="nf">Println</span><span class="p">(</span><span class="s">&#34;Listening for requests at http://localhost:8000/hello&#34;</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">log</span><span class="p">.</span><span class="nf">Fatal</span><span class="p">(</span><span class="nx">http</span><span class="p">.</span><span class="nf">ListenAndServe</span><span class="p">(</span><span class="s">&#34;:8000&#34;</span><span class="p">,</span><span class="w"> </span><span class="kc">nil</span><span class="p">))</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Now run this with the following command.</p>
<pre tabindex="0"><code>go run cmd/simple-service/main.go
</code></pre><p>This should print out a URL you can navigate to in your browser to see the classic <code>Hello, world!</code>.</p>
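<p>You can also hit the endpoint from a second terminal using curl; this assumes the server above is still running on port 8000.</p>
<pre tabindex="0"><code># request the hello endpoint from the running server
curl http://localhost:1323/../ 2>/dev/null; curl http://localhost:8000/hello
</code></pre>
<p>This should return <code>Hello, world!</code>.</p>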
<p>From here you will want to set up an editor. I personally use <a href="https://code.visualstudio.com/">vscode</a>, which has really good support for Go once you add the <a href="https://github.com/Microsoft/vscode-go">Go plugin</a>.</p>
<p>Next I would recommend looking at something like <a href="https://echo.labstack.com/">echo</a>, a great web framework which is well documented and has lots of great <a href="https://github.com/labstack/echox/tree/master/cookbook">examples</a>.</p>
<p>To add this library, either add the import in your editor (vscode will automatically trigger a download of the module) or run:</p>
<pre tabindex="0"><code>go get -u -v github.com/labstack/echo/v4
</code></pre><p><strong>NOTE:</strong> We are using the <code>v4</code> tagged import because we are using Go Modules; I have also ensured this <code>v4</code> tag appears in the imports in the following example.</p>
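<p>After running this, a require entry is added to your <code>go.mod</code>. It should look something like the following; the module path and version shown here are illustrative, yours will differ.</p>
<pre tabindex="0"><code>module github.com/yourname/simple-service

go 1.13

require github.com/labstack/echo/v4 v4.1.11
</code></pre>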
<p>Then update <code>main.go</code> to use this great little REST CRUD example.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="kn">package</span><span class="w"> </span><span class="nx">main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;net/http&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;strconv&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;github.com/labstack/echo/v4&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;github.com/labstack/echo/v4/middleware&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">type</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">user</span><span class="w"> </span><span class="kd">struct</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="nx">ID</span><span class="w">   </span><span class="kt">int</span><span class="w">    </span><span class="s">`json:&#34;id&#34;`</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="nx">Name</span><span class="w"> </span><span class="kt">string</span><span class="w"> </span><span class="s">`json:&#34;name&#34;`</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">var</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">users</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="kd">map</span><span class="p">[</span><span class="kt">int</span><span class="p">]</span><span class="o">*</span><span class="nx">user</span><span class="p">{}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">seq</span><span class="w">   </span><span class="p">=</span><span class="w"> </span><span class="mi">1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="cp">//----------</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c1">// Handlers</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="cp">//----------</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">createUser</span><span class="p">(</span><span class="nx">c</span><span class="w"> </span><span class="nx">echo</span><span class="p">.</span><span class="nx">Context</span><span class="p">)</span><span class="w"> </span><span class="kt">error</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">u</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="o">&amp;</span><span class="nx">user</span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="nx">ID</span><span class="p">:</span><span class="w"> </span><span class="nx">seq</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">if</span><span class="w"> </span><span class="nx">err</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nx">c</span><span class="p">.</span><span class="nf">Bind</span><span class="p">(</span><span class="nx">u</span><span class="p">);</span><span class="w"> </span><span class="nx">err</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="kc">nil</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="k">return</span><span class="w"> </span><span class="nx">err</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">users</span><span class="p">[</span><span class="nx">u</span><span class="p">.</span><span class="nx">ID</span><span class="p">]</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="nx">u</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">seq</span><span class="o">++</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">return</span><span class="w"> </span><span class="nx">c</span><span class="p">.</span><span class="nf">JSON</span><span class="p">(</span><span class="nx">http</span><span class="p">.</span><span class="nx">StatusCreated</span><span class="p">,</span><span class="w"> </span><span class="nx">u</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">getUser</span><span class="p">(</span><span class="nx">c</span><span class="w"> </span><span class="nx">echo</span><span class="p">.</span><span class="nx">Context</span><span class="p">)</span><span class="w"> </span><span class="kt">error</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">id</span><span class="p">,</span><span class="w"> </span><span class="nx">_</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nx">strconv</span><span class="p">.</span><span class="nf">Atoi</span><span class="p">(</span><span class="nx">c</span><span class="p">.</span><span class="nf">Param</span><span class="p">(</span><span class="s">&#34;id&#34;</span><span class="p">))</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">return</span><span class="w"> </span><span class="nx">c</span><span class="p">.</span><span class="nf">JSON</span><span class="p">(</span><span class="nx">http</span><span class="p">.</span><span class="nx">StatusOK</span><span class="p">,</span><span class="w"> </span><span class="nx">users</span><span class="p">[</span><span class="nx">id</span><span class="p">])</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">updateUser</span><span class="p">(</span><span class="nx">c</span><span class="w"> </span><span class="nx">echo</span><span class="p">.</span><span class="nx">Context</span><span class="p">)</span><span class="w"> </span><span class="kt">error</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">u</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nb">new</span><span class="p">(</span><span class="nx">user</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">if</span><span class="w"> </span><span class="nx">err</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nx">c</span><span class="p">.</span><span class="nf">Bind</span><span class="p">(</span><span class="nx">u</span><span class="p">);</span><span class="w"> </span><span class="nx">err</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="kc">nil</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="k">return</span><span class="w"> </span><span class="nx">err</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">id</span><span class="p">,</span><span class="w"> </span><span class="nx">_</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nx">strconv</span><span class="p">.</span><span class="nf">Atoi</span><span class="p">(</span><span class="nx">c</span><span class="p">.</span><span class="nf">Param</span><span class="p">(</span><span class="s">&#34;id&#34;</span><span class="p">))</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">users</span><span class="p">[</span><span class="nx">id</span><span class="p">].</span><span class="nx">Name</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="nx">u</span><span class="p">.</span><span class="nx">Name</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">return</span><span class="w"> </span><span class="nx">c</span><span class="p">.</span><span class="nf">JSON</span><span class="p">(</span><span class="nx">http</span><span class="p">.</span><span class="nx">StatusOK</span><span class="p">,</span><span class="w"> </span><span class="nx">users</span><span class="p">[</span><span class="nx">id</span><span class="p">])</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">deleteUser</span><span class="p">(</span><span class="nx">c</span><span class="w"> </span><span class="nx">echo</span><span class="p">.</span><span class="nx">Context</span><span class="p">)</span><span class="w"> </span><span class="kt">error</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">id</span><span class="p">,</span><span class="w"> </span><span class="nx">_</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nx">strconv</span><span class="p">.</span><span class="nf">Atoi</span><span class="p">(</span><span class="nx">c</span><span class="p">.</span><span class="nf">Param</span><span class="p">(</span><span class="s">&#34;id&#34;</span><span class="p">))</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nb">delete</span><span class="p">(</span><span class="nx">users</span><span class="p">,</span><span class="w"> </span><span class="nx">id</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">return</span><span class="w"> </span><span class="nx">c</span><span class="p">.</span><span class="nf">NoContent</span><span class="p">(</span><span class="nx">http</span><span class="p">.</span><span class="nx">StatusNoContent</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">main</span><span class="p">()</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nx">echo</span><span class="p">.</span><span class="nf">New</span><span class="p">()</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="c1">// Middleware</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="p">.</span><span class="nf">Use</span><span class="p">(</span><span class="nx">middleware</span><span class="p">.</span><span class="nf">Logger</span><span class="p">())</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="p">.</span><span class="nf">Use</span><span class="p">(</span><span class="nx">middleware</span><span class="p">.</span><span class="nf">Recover</span><span class="p">())</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="c1">// Routes</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="p">.</span><span class="nf">POST</span><span class="p">(</span><span class="s">&#34;/users&#34;</span><span class="p">,</span><span class="w"> </span><span class="nx">createUser</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="p">.</span><span class="nf">GET</span><span class="p">(</span><span class="s">&#34;/users/:id&#34;</span><span class="p">,</span><span class="w"> </span><span class="nx">getUser</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="p">.</span><span class="nf">PUT</span><span class="p">(</span><span class="s">&#34;/users/:id&#34;</span><span class="p">,</span><span class="w"> </span><span class="nx">updateUser</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="p">.</span><span class="nf">DELETE</span><span class="p">(</span><span class="s">&#34;/users/:id&#34;</span><span class="p">,</span><span class="w"> </span><span class="nx">deleteUser</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="c1">// Start server</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">e</span><span class="p">.</span><span class="nx">Logger</span><span class="p">.</span><span class="nf">Fatal</span><span class="p">(</span><span class="nx">e</span><span class="p">.</span><span class="nf">Start</span><span class="p">(</span><span class="s">&#34;:1323&#34;</span><span class="p">))</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Now run this with the following command.</p>
<pre tabindex="0"><code>go run cmd/simple-service/main.go
</code></pre><p>Hopefully you have managed to get this service running and have started testing it with something like <a href="https://www.postman.com/">postman</a> &#x1f389;.</p>
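<p>If you prefer the command line over postman, you can exercise each of the CRUD routes with curl; this assumes the service above is running locally on port 1323, and the JSON payloads are just example data.</p>
<pre tabindex="0"><code># create a user, which returns the generated id
curl -X POST -H 'Content-Type: application/json' -d '{"name":"Joe"}' http://localhost:1323/users

# fetch the user we just created
curl http://localhost:1323/users/1

# rename the user
curl -X PUT -H 'Content-Type: application/json' -d '{"name":"Jane"}' http://localhost:1323/users/1

# delete the user
curl -X DELETE http://localhost:1323/users/1
</code></pre>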
<p>For next steps I recommend reading <a href="https://www.wolfe.id.au/2020/03/10/how-do-i-structure-my-go-project/">How do I Structure my Go Project?</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Building a WLToys A979 donkey car</title>
      <link>https://www.wolfe.id.au/2019/12/22/building-a-wltoys-a979-donkey-car/</link>
      <pubDate>Sun, 22 Dec 2019 15:37:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2019/12/22/building-a-wltoys-a979-donkey-car/</guid>
      <description>&lt;p&gt;In my spare time I spend a bit of time building and working on a scaled-down self-driving RC project which uses an open-source project called &lt;a href=&#34;https://github.com/autorope/donkeycar&#34;&gt;donkeycar&lt;/a&gt;. I have been through a few generations of car, learning how both the hardware and software work. Hopefully I can share some tips on how to avoid at least some of my mistakes in this post.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://www.wolfe.id.au/images/2019-12-22-building-a-custom-donkey-car_19.07.18.png&#34;/&gt; &lt;figcaption&gt;
            Current Car
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id=&#34;starting-out&#34;&gt;Starting Out&lt;/h3&gt;
&lt;p&gt;Probably the lion&amp;rsquo;s share of lessons, at least initially, were learnt setting up a custom RC car, especially the power and drive train. I have put together a bunch of recommendations:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In my spare time I spend a bit of time building and working on a scaled-down self-driving RC project which uses an open-source project called <a href="https://github.com/autorope/donkeycar">donkeycar</a>. I have been through a few generations of car, learning how both the hardware and software work. Hopefully I can share some tips on how to avoid at least some of my mistakes in this post.</p>
<figure>
    <img loading="lazy" src="/images/2019-12-22-building-a-custom-donkey-car_19.07.18.png"/> <figcaption>
            Current Car
        </figcaption>
</figure>

<h3 id="starting-out">Starting Out</h3>
<p>Probably the lion&rsquo;s share of lessons, at least initially, were learnt setting up a custom RC car, especially the power and drive train. I have put together a bunch of recommendations:</p>
<ol>
<li>Always buy at least two servos, and check the number of teeth, force, and dimensions carefully.</li>
<li>Learn to solder bullet connectors; there are some good tutorials on YouTube, including how to get the nice shrink-wrapped look.</li>
<li>Always use a separate power supply for the computer on your car. Voltage drops can occur for a number of reasons during operation, and keeping the supplies isolated will avoid the computer crashing and the car running into a wall.</li>
<li>Cable ties are a lifesaver when hiding or stashing cables; buy a good collection of sizes.</li>
<li>Start out with brushed motors; they are cheap and work better at low speeds.</li>
</ol>
<h3 id="the-car-build">The Car Build</h3>
<p>For the base of my current build I chose the WLToys A979 4WD Truck, for a few reasons:</p>
<ol>
<li>Same base as the original <a href="https://aws.amazon.com/deepracer/">AWS Deepracer</a>.</li>
<li>Good solid build and lots of spares available online.</li>
<li>Short wheelbase and good turning circle so great for use indoors.</li>
<li>Cheap and easy to purchase and ship to Australia.</li>
<li>Has a brushed motor, which is great at low speeds; this is very important indoors and while learning to drive.</li>
</ol>
<p>You can buy it from <a href="https://www.banggood.com/Wltoys-A979-118-2_4Gh-4WD-Monster-Truck-p-916960.html?rmmds=buy&amp;ID=229&amp;cur_warehouse=CN">banggood</a> for approx 90 AUD.</p>
<p>Then I replaced the <a href="https://en.wikipedia.org/wiki/Electronic_speed_control">Electronic Speed controller(ESC)</a> with a <a href="https://hobbyking.com/en_us/turnigy-20a-brushed-esc.html">Turnigy 20A BRUSHED ESC</a> as the one provided was integrated with the radio and servo using some sort of proprietary setup.</p>
<figure>
    <img loading="lazy" src="/images/2019-12-22-building-a-custom-donkey-car_16.59.49.png"/> <figcaption>
            Motor Wiring
        </figcaption>
</figure>

<p>Lastly I replaced the servo with a <a href="https://hobbyking.com/en_us/turnigytm-s3101s-servo-2-5kg-0-14sec-17g.html">Turnigy™ S3101S Servo 2.4kg / 0.14sec / 17g 24T</a>.</p>
<p>To connect all this stuff you will need some extra parts.</p>
<ul>
<li>Heat shrink such as <a href="https://hobbyking.com/en_us/turnigy-4mm-heat-shrink-tube-black-1mtr-1.html">Turnigy Heat Shrink Tube 4mm Black</a>.</li>
<li>Bullet connectors such as <a href="https://hobbyking.com/en_us/4mm-easy-solder-gold-connectors-10-pairs.html">4mm Easy Solder Gold Connectors (10 pairs)</a>.</li>
<li>XT60 connectors which are used to connect 2s batteries such as <a href="https://hobbyking.com/en_us/nylon-xt60-connectors-male-female-5-pairs-genuine.html">Nylon XT60 Connectors Male/Female (5 pairs)</a>.</li>
</ul>
<p>This will also be useful with any power upgrades such as the one listed down the bottom.</p>
<h3 id="the-compute-build">The Compute Build</h3>
<p>To mount the compute on top of the car I used <a href="https://github.com/PancakeLegend/Dingo-Mount">Dingo-Mount</a>, which was cut on a laser cutter at <a href="https://www.hackmelbourne.org/">CCHS</a> using some 3mm plywood, which is light and strong.</p>
<p>The hardware I use on top of the car is as follows:</p>
<ul>
<li><a href="https://developer.nvidia.com/embedded/jetson-nano-developer-kit">Jetson Nano</a> which you can buy from <a href="https://www.seeedstudio.com/NVIDIA-Jetson-Nano-Development-Kit-p-2916.html">seeed studio</a>.</li>
<li><a href="https://cplonline.com.au/intel-8265-ngwmg-dtx1-dual-band-wireless-ac-8265-desktop-kit.html">Intel 8265.NGWMG.DTX1 DUAL BAND Wireless-AC 8265 Desktop KIT</a>.</li>
<li><a href="https://www.raspberrypi.org/products/camera-module-v2/">Raspberry Pi Camera Module V2</a> also buy from from <a href="https://www.seeedstudio.com/Raspberry-Pi-Camera-Module-V2-p-2800.html">seeed studio</a>.</li>
<li>An SSD to store all your data on, and because running this machine off sdcard is very slow. I used a <a href="https://www.amazon.com.au/Crucial-BX500-240GB-NAND-2-5-inch/dp/B07G3YNLJB/ref=sr_1_4?keywords=ssd&#43;crucial&amp;qid=1576992516&amp;s=computers&amp;sr=1-4">Crucial BX500 240GB 3D NAND SATA 2.5-inch SSD</a>.</li>
<li>A USB to SATA cable such as <a href="https://www.amazon.com.au/gp/css/summary/edit.html/ref=dp_iou_view_this_order?ie=UTF8&amp;orderID=250-3610443-5463054">Onvian USB 3.0 To SATA 22 Pin 15+7 Pin 2.5 Inch Hard Disk Driver SSD Adapter Data Power Cable</a>.</li>
<li>A USB powerbank such as <a href="https://www.amazon.com.au/ROMOSS-20000mAh-Portable-External-Compatible/dp/B07H3RRZXT/ref=sr_1_4?keywords=USB&#43;Power&#43;bank&amp;qid=1576992675&amp;sr=8-4">18W Power Bank, ROMOSS 20000mAh Portable Charger</a>.</li>
</ul>
<p>I used some brass standoffs, like these <a href="https://www.amazon.com.au/Sutemribor-Female-Spacer-Standoff-Assortment/dp/B075K3QBMX/ref=sr_1_1?keywords=brass&#43;standoffs&amp;qid=1576992928&amp;sr=8-1">M2.5 Male Female Hex Brass Spacer Standoff</a>, similar to the ones you get when you buy a computer case. This enabled me to mount the SSD under my Jetson Nano, with some rubber preventing the bottom of the board from shorting out on the SSD.</p>
<figure>
    <img loading="lazy" src="/images/2019-12-22-building-a-custom-donkey-car_17.00.46.png"/> <figcaption>
            SSD Mounting
        </figcaption>
</figure>

<p>To drive the servo and ESC I used <a href="https://learn.adafruit.com/16-channel-pwm-servo-driver?view=all">Adafruit PCA9685 16-Channel Servo Driver</a> which you can get from <a href="https://core-electronics.com.au/adafruit-16-channel-12-bit-pwm-servo-driver-i2c-interface-pca9685.html">Core Electronics</a> or purchase something similar from <a href="https://www.aliexpress.com/item/33047932849.html?spm=a2g0s.9042311.0.0.2c4f4c4d7mf9h7">aliexpress</a>.</p>
<figure>
    <img loading="lazy" src="/images/2019-12-22-building-a-custom-donkey-car_17.02.14.png"/> <figcaption>
            Camera and Servo Board
        </figcaption>
</figure>

<h3 id="power-options">Power Options</h3>
<p>If you want to use the full power of the Jetson Nano compute then you may need to get yourself an SBEC such as <a href="https://hobbyking.com/en_us/turnigy-5a-8-40v-sbec-for-lipo.html">Turnigy 5A (8-40v) SBEC for Lipo</a> and rig up an extra 2S1P LIPO such as <a href="https://hobbyking.com/en_us/turnigy-nano-tech-2200mah-2s-25-50c-lipo-pack.html">Turnigy nano-tech 2200mah 2S 25~50C Lipo Pack</a>. I currently use this setup to avoid dropouts when doing inference with higher resolution images (224px * 224px). Attached to this is a 2.1mm barrel cable I had lying around.</p>
<figure>
    <img loading="lazy" src="/images/2019-12-22-building-a-custom-donkey-car_17.14.22.png"/> <figcaption>
            Power Loop with SBEC
        </figcaption>
</figure>

<h2 id="driving-demonstration">Driving Demonstration</h2>
<p>This is my car on autopilot running around a tight track in the CCHS workshop.</p>
<iframe width="1236" height="695" src="https://www.youtube.com/embed/vrqaF1Nr2qg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></content:encoded>
    </item>
    
    <item>
      <title>Getting started with Cognito?</title>
      <link>https://www.wolfe.id.au/2019/12/16/getting-started-with-cognito/</link>
      <pubDate>Mon, 16 Dec 2019 10:46:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2019/12/16/getting-started-with-cognito/</guid>
      <description>&lt;p&gt;The AWS &lt;a href=&#34;https://aws.amazon.com/cognito/&#34;&gt;Cognito&lt;/a&gt; product enables developers to build web or API based applications without worrying about authentication and authorisation.&lt;/p&gt;
&lt;p&gt;When setting up an application&amp;rsquo;s authentication I try to keep in mind a few goals:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Keep my users&amp;rsquo; data as safe as possible.&lt;/li&gt;
&lt;li&gt;Try and find something which is standards based, or supports integrating with standard protocols such as &lt;a href=&#34;https://openid.net/&#34;&gt;openid&lt;/a&gt;, &lt;a href=&#34;https://oauth.net/2/&#34;&gt;oauth2&lt;/a&gt; and &lt;a href=&#34;https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language&#34;&gt;SAML&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Evaluate the authentication flows I need and avoid increasing scope and risk.&lt;/li&gt;
&lt;li&gt;Try to use a service to start with, or secondarily, an opensource project with a good security process and a healthy community.&lt;/li&gt;
&lt;li&gt;Limit any custom development to extensions, rather than throwing out the baby with the bath water.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As you can probably tell, my primary goal is to keep authentication out of my applications; I really don&amp;rsquo;t have the time or inclination to manage a handcrafted authentication solution.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The AWS <a href="https://aws.amazon.com/cognito/">Cognito</a> product enables developers to build web or API based applications without worrying about authentication and authorisation.</p>
<p>When setting up an application&rsquo;s authentication I try to keep in mind a few goals:</p>
<ol>
<li>Keep my users&rsquo; data as safe as possible.</li>
<li>Try to find something which is standards-based, or supports integrating with standard protocols such as <a href="https://openid.net/">openid</a>, <a href="https://oauth.net/2/">oauth2</a> and <a href="https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language">SAML</a>.</li>
<li>Evaluate the authentication flows I need and avoid increasing scope and risk.</li>
<li>Try to use a service to start with, or secondarily, an open source project with a good security process and a healthy community.</li>
<li>Limit any custom development to extensions, rather than throwing out the baby with the bath water.</li>
</ol>
<p>As you can probably tell, my primary goal is to keep authentication out of my applications; I really don&rsquo;t have the time or inclination to manage a handcrafted authentication solution.</p>
<h3 id="what-does-aws-cognito-provide-out-of-the-box">What does AWS Cognito provide out of the box?</h3>
<p>Let&rsquo;s look at what we get out of the box:</p>
<ul>
<li>Storing and protecting your users data with features such as <a href="https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol">Secure Remote Password Protocol (SRP)</a>.</li>
<li>Signing up for an account with email/SMS verification</li>
<li>Signing in, optionally with Multi Factor Authentication (MFA)</li>
<li>Password change and recovery</li>
<li>A number of triggers which can be used to extend the product</li>
</ul>
<h3 id="what-is-great-about-cognito">What is great about Cognito?</h3>
<p>Where AWS Cognito really shines is:</p>
<ul>
<li>Out of the box <a href="https://aws.amazon.com/compliance/services-in-scope/">compliance with standards such as SOC and PCI</a></li>
<li>Integration with a range of <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-social-idp.html">platform identity providers</a>, such as Google, Apple and Amazon</li>
<li>Support for integration with identity providers (IdPs) using OpenID and SAML.</li>
<li>Really easy to integrate into your application using libraries such as <a href="https://github.com/aws-amplify/amplify-js">AmplifyJS</a>.</li>
<li>AWS manages it for a minimal cost</li>
</ul>
<h3 id="what-is-not-so-great-about-cognito">What is not so great about Cognito?</h3>
<p>Where AWS Cognito can be a challenge for developers:</p>
<ul>
<li>Can be difficult to set up, and some settings can only be changed during creation; changing these requires you to delete and recreate your pool.</li>
<li>Per account quotas on API calls</li>
<li>A lack of search</li>
<li>No inbuilt way to back up and restore the user data in your pool</li>
</ul>
<p>So how do we address some of these challenges, while still getting the value provided and being able to capitalise on its security and compliance features?</p>
<h3 id="what-is-the-best-way-to-setup-cognito">What is the best way to setup Cognito?</h3>
<p>To set up Cognito I typically use one of the many open source <a href="https://aws.amazon.com/cloudformation/">cloudformation</a> templates on <a href="https://github.com/">GitHub</a>. I crafted this template some time ago, <a href="https://gist.github.com/wolfeidau/70531fc1a593c0bad7fb9ebc9ae82580">cognito.yml</a>; it supports login using <code>email</code> address, and domain whitelisting for sign ups.</p>
<p>As a follow on from this I built a serverless application <a href="https://github.com/wolfeidau/serverless-cognito-auth">serverless-cognito-auth</a> which encapsulates a lot of the standard functionality I use in applications.</p>
<p>You can also use <a href="https://docs.aws.amazon.com/aws-mobile/latest/developerguide/mobile-hub-features.html">AWS Mobile Hub</a> or <a href="https://aws.amazon.com/amplify/">AWS Amplify</a> to bootstrap a Cognito pool for you.</p>
<p>Overall recommendations are:</p>
<ol>
<li>If you&rsquo;re new to Cognito and want things to just work, then I recommend trying <a href="https://aws.amazon.com/amplify/">AWS Amplify</a>.</li>
<li>If you are an old hand and just want Cognito the way you like it, then use one of the many prebuilt templates.</li>
</ol>
<h3 id="how-do-i-avoid-quota-related-issues">How do I avoid quota related issues?</h3>
<p>Firstly I recommend familiarising yourself with the <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html">AWS Cognito Limits Page</a>.</p>
<p>I haven&rsquo;t seen an application hit these request rate limits more than a couple of times, and both of those were related to UI bugs which continuously polled the Cognito API.</p>
<p>The one limit I have seen hit is the sign-up emails per day limit; this can be a pain on launch days for apps, or when there is a spike in sign-ups. If you&rsquo;re planning to use Cognito in a startup you will need to integrate with <a href="https://aws.amazon.com/ses/">SES</a>.</p>
<h3 id="how-do-i-work-around-searching-my-user-database">How do I work around searching my user database?</h3>
<p>Out of the box Cognito will only allow you to list and filter users by a list of common attributes; this doesn&rsquo;t include custom attributes, so if you add an attribute like customerId you won&rsquo;t be able to find all users with a given value.</p>
<p>This limitation makes it difficult to replace an internal database-driven authentication library using just the Cognito service, so for this reason I recommend adding a DynamoDB table to your application and integrating this with Cognito using <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html">lambda triggers</a> to build your own global user store.</p>
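<p>As a sketch of that approach, a Cognito post confirmation trigger can mirror each newly confirmed user into your own DynamoDB table. The table name, attribute layout and custom attribute below are my own illustrative assumptions, not code from any particular project:</p>

```python
# Hypothetical Cognito post-confirmation Lambda trigger which copies
# newly confirmed users into an application-owned DynamoDB table.
import os


def handler(event, context, table=None):
    # Allow a fake table to be injected for testing; in Lambda the
    # real table is resolved from an environment variable.
    if table is None:
        import boto3
        table = boto3.resource("dynamodb").Table(os.environ["USER_TABLE"])

    attrs = event["request"]["userAttributes"]
    table.put_item(Item={
        "username": event["userName"],
        "email": attrs.get("email", ""),
        # custom attributes arrive prefixed with "custom:"
        "customerId": attrs.get("custom:customerId", ""),
    })
    # Cognito requires the event to be returned to complete the trigger.
    return event
```

<p>With the user data mirrored into DynamoDB you can add secondary indexes on attributes like customerId, and get the search capability Cognito lacks.</p>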
<p>To simplify interacting with Cognito I wrote a CLI which provides some helpful commands to scan, filter, export and perform some common admin functions; you can find it at <a href="https://github.com/wolfeidau/cognito-cli">https://github.com/wolfeidau/cognito-cli</a>.</p>
<h3 id="how-do-i-back-up-my-user-pool">How do I back up my user pool?</h3>
<p>Backing up user accounts in any system is something you need to consider carefully, as this information typically includes credentials as well as other sensitive data such as mobile numbers, which are often used as a second factor for other services.</p>
<p>Currently Cognito doesn&rsquo;t provide a simple way of exporting user data; the service does however have an import function which will import users from a CSV file.</p>
<p><strong>Note:</strong> AWS Cognito doesn&rsquo;t support exporting user passwords; these will need to be reset after a restore.</p>
<p>For some examples of tooling see <a href="https://www.npmjs.com/package/cognito-backup-restore">cognito-backup-restore</a>.</p>
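<p>If you only need the attributes, a rough export sketch using boto3 looks like the following; the flattening helper is my own, and remember passwords are never returned by the API:</p>

```python
# Sketch: page through a user pool with boto3 and flatten each user's
# Attributes list into one dict per row, ready for csv.DictWriter.


def flatten_user(user):
    # A list_users record looks like:
    # {"Username": "...", "Attributes": [{"Name": "email", "Value": "..."}]}
    row = {"Username": user["Username"]}
    for attr in user.get("Attributes", []):
        row[attr["Name"]] = attr["Value"]
    return row


def export_users(client, pool_id):
    # client is a boto3 "cognito-idp" client, injected for testability.
    rows = []
    for page in client.get_paginator("list_users").paginate(UserPoolId=pool_id):
        rows.extend(flatten_user(u) for u in page["Users"])
    return rows
```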
<h1 id="conclusion">Conclusion</h1>
<p>If you really care about security and compliance then Cognito is a great solution; it has some limitations and gaps, but these can be worked around if you want to focus your effort somewhere else.</p>
<p>Personally I think it is really important that as a developer I pick solutions which ensure my customers&rsquo; data is locked away as securely as possible, ideally using a managed service.</p>
<p>You could totally roll your own authentication solution, and manage all the patching and challenges which go with that, but that makes very little sense when you should probably be solving the original problem you had.</p>
<p>Authentication is a <a href="https://americanexpress.io/yak-shaving/">yak I am willing to let someone else <del>shave</del></a>  manage, and so should you, if not for your own sanity, then that of your users.</p>
<p>Lastly, if you&rsquo;re building out a web application, use <a href="https://aws-amplify.github.io/amplify-js/api/">amplify-js</a>; this library makes it so easy to add Cognito authentication to your web application. I used it on <a href="https://github.com/wolfeidau/cognito-vue-bootstrap">cognito-vue-bootstrap</a> which you can also check out.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why CDK?</title>
      <link>https://www.wolfe.id.au/2019/08/01/why-cdk/</link>
      <pubDate>Thu, 01 Aug 2019 10:46:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2019/08/01/why-cdk/</guid>
      <description>&lt;p&gt;Early this year amazon web services released the &lt;a href=&#34;https://aws.amazon.com/cdk/&#34;&gt;Cloud Development Kit (CDK)&lt;/a&gt; which is best summed up by a quote from the GitHub project.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Before I go recommending this new project to anyone I most certainly need to road test it myself. This post provides a bit of background on where I work, why I am looking into CDK, and what I would love to see in the future.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Early this year amazon web services released the <a href="https://aws.amazon.com/cdk/">Cloud Development Kit (CDK)</a> which is best summed up by a quote from the GitHub project.</p>
<blockquote>
<p>The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation.</p>
</blockquote>
<p>Before I go recommending this new project to anyone I most certainly need to road test it myself. This post provides a bit of background on where I work, why I am looking into CDK, and what I would love to see in the future.</p>
<h1 id="background">Background</h1>
<p>I have been building applications in <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> for a number of years, using as many of the services as possible to keep things lean and online. That said, it doesn’t come without some overhead and many lessons learned. While working in AWS I have always chosen to stick to native tools, such as <a href="https://aws.amazon.com/cloudformation/">Cloudformation</a>, augmented by a range of deployment tools; this means I get all the power with the inherent complexity, which grows with every <a href="https://reinvent.awsevents.com/">AWS reinvent</a> conference.</p>
<p>Hiring into an organization which works very closely with AWS comes with some challenges. New hires will typically find themselves learning a lot of new services, while also grappling with Cloudformation. This can really impact a new team member’s productivity, and more importantly their confidence, especially when the first few PR reviews call out security issues and subtle pitfalls, such as resource naming, in the examples they find on the internet.</p>
<p>For this reason I have been looking at ways to reduce the risk of issues in production without falling into the trap of isolating infrastructure development to a small number of &ldquo;experts&rdquo;; this is why CDK popped up on my radar. It promises to allow developers to assemble stacks using reusable patterns, either developed by AWS or internally, using code rather than <a href="https://en.wikipedia.org/wiki/YAML">YAML</a>, which in my view is a big plus.</p>
<p>In short I care more about people than I care about technology, I want it to empower those who use it, not hold them back.</p>
<h1 id="road-testing">Road Testing</h1>
<p>As I was starting to road test CDK I was fortunate enough to catch up with some of my peers from <a href="https://aws.amazon.com/partners/ambassadors/">AWS Partner Community</a> and get some good tips and anecdotes on what to dig into. Based on this I have put together some points, these are grouped into the good, the bad and the ugly.</p>
<h1 id="the-good">The Good</h1>
<ul>
<li>CDK enables developers to describe their infrastructure in code using an object model, then lets them synthesize it into Cloudformation templates.
<ul>
<li>VPC resources can be “connected” to each other, this automatically creates the required security groups, and entries in them.</li>
<li>Accessing a secret value will also update <a href="https://aws.amazon.com/iam/">IAM</a> policies, updating roles with the required policy changes.</li>
</ul>
</li>
<li>CDK automatically creates policies that narrow access down to the least privileges required to operate based on your model. This is a boon, as it is one of the most complex and time consuming aspects of crafting a Cloudformation template.</li>
<li>The Cloudformation produced by CDK has sane defaults such as:
<ul>
<li>Enables deletion protection for <a href="https://aws.amazon.com/rds/">Relational Database Service (RDS)</a> instances to avoid accidental deletion during stack updates.</li>
<li>Enables orphaning of S3 buckets which leaves them behind when a stack update occurs, therefore avoiding deletion of all your data when messing with configuration of a resource in your stack.</li>
</ul>
</li>
<li>Includes patterns which incorporate a range of best practices, helpers and security enhancements.
<ul>
<li>An example of this is the <code>LoadBalancedFargateService</code>, which can build and deploy a local service using a <code>Dockerfile</code> without ever having to delve into the finer points of Elastic Container Registry (ECR), Elastic Container Service (ECS) or Application Load Balancers (ALB).</li>
</ul>
</li>
<li>Personally I feel a lot more productive with CDK; I am writing less code and producing more secure, consistent infrastructure.</li>
</ul>
<h1 id="the-bad">The Bad</h1>
<ul>
<li>Although amazing, the patterns feel like black boxes; there is no way to click through into the source code of an underlying pattern and dig into how it works.
<ul>
<li>Personally I think these should illustrate how amazing this model is, and act as a springboard into developing your own modules; currently it all feels like a black box.</li>
<li>Yes I can clone and dig into repositories but the whole point of this is to be here for a good time, not a long time.</li>
</ul>
</li>
<li>It is really difficult to lock down the version of CDK in your NodeJS projects once a new release has come out. If there are changes I want to skip, then I have to get a lock file from an older project, which breaks as soon as I add other CDK modules.
<ul>
<li>This is a less than ideal user experience for teams who aren’t moving as fast as the CDK development team.</li>
<li>Note work is happening to sort out semver usage in cdk packages <a href="https://github.com/aws/aws-cdk/issues/3711">issue #3711</a> which is great!</li>
</ul>
</li>
<li>The whole multi-language, cross-compiled thing seems very limited at the moment, especially around the lack of support for sharing modules developed in languages other than Typescript.
<ul>
<li>For more information on how CDK <em>delivers polyglot libraries</em> check out the <a href="https://github.com/aws/jsii">aws/jsii project on GitHub</a></li>
<li>Some background on the <a href="https://github.com/aws/aws-cdk/issues/972#issuecomment-502715577">Python</a> experience requiring NodeJS tooling.</li>
</ul>
</li>
</ul>
<h1 id="the-ugly">The Ugly</h1>
<ul>
<li>In the current CDK I am encouraged, if not required, to synthesize my templates in every AWS account I use; this is a big red flag for me.
<ul>
<li>If a team member updates a service a couple of months after its initial release, there is NO guarantee the same Cloudformation will be generated. To cover this, operators will need to “stash” or archive templates for every account, before every deploy.</li>
<li>The NPM locking issues around pinning upgrades really restricts your power to ensure managed changes to Cloudformation.</li>
</ul>
</li>
</ul>
<p>This lack of reusable, easily reproducible artifacts is a bit of a show stopper for me; given the number of times I have been let down by tools which generate Cloudformation, I am loath to leap back into it for a production environment.</p>
<h1 id="summary">Summary</h1>
<p>In short, I will not be putting CDK between me and a production environment until some of these reproducibility challenges are addressed. Like many of my peers, I have always advocated for solid, reproducible infrastructure tooling that is as simple as possible to recover and roll back.</p>
<p>That said, I will most definitely be using CDK to quickly generate Cloudformation, especially generating IAM policies with least privilege, and harvesting some of the great ideas and tricks from the patterns.</p>
<p>I would recommend using Typescript to develop CDK scripts, this will ensure you get the most reuse and enable harvesting directly from the CDK patterns!</p>
<h2 id="contributors">Contributors</h2>
<p>Thanks to <a href="https://twitter.com/hashishrajan">Ashish Rajan @hashishrajan</a> and <a href="https://twitter.com/elrowan">Rowan Udell @elrowan</a> for reviewing this post, <a href="https://twitter.com/iann0036">Ian Mckay @iann0036</a> for starting an impromptu CDK discussion in Seattle and <a href="https://twitter.com/aaronwalker">Aaron Walker @aaronwalker</a> for being a great sounding board and walking me through some of his experience with CDK.</p>
<h2 id="example-project">Example Project</h2>
<p>My current work with CDK is mainly focused on providing infrastructure to a container based application called exitus which is hosted on <a href="https://github.com/wolfeidau/exitus">GitHub</a>, with the CDK infra code in <a href="https://github.com/wolfeidau/exitus/blob/master/infra/exitus.ts">exitus.ts</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Serverless Background jobs part 2</title>
      <link>https://www.wolfe.id.au/2019/05/12/serverless-background-jobs-part-2/</link>
      <pubDate>Sun, 12 May 2019 12:00:47 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2019/05/12/serverless-background-jobs-part-2/</guid>
      <description>&lt;p&gt;Step Functions allow you to build pipelines involving one or more amazon, or external service. Some examples of this are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;complex customer onboarding jobs which provision resources then send a welcome email&lt;/li&gt;
&lt;li&gt;billing jobs where you may need to wait for payment authorisation&lt;/li&gt;
&lt;li&gt;provisioning users and setting up any resources each user may need&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;pipeline&#34;&gt;pipeline&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the output of each element is the input of the next; the name is by analogy to a physical pipeline.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Step Functions allow you to build pipelines involving one or more amazon, or external service. Some examples of this are:</p>
<ul>
<li>complex customer onboarding jobs which provision resources then send a welcome email</li>
<li>billing jobs where you may need to wait for payment authorisation</li>
<li>provisioning users and setting up any resources each user may need</li>
</ul>
<h2 id="pipeline">pipeline</h2>
<blockquote>
<p>In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the output of each element is the input of the next; the name is by analogy to a physical pipeline.</p>
</blockquote>
<p>The term pipeline is used a lot in building software, but can refer to any chain of tasks.</p>
<p>Over the last couple of years I have used Step Functions in a range of business applications, initially with mixed success due to service limitations and trying to fit complex &ldquo;new requirements&rdquo; into the model. Over time this changed as I came to better understand where step functions start and end.</p>
<p>I have put together a list of tips and recommendations for those using step functions.</p>
<h3 id="start-small">Start Small</h3>
<p>Practice with a few small workflows to get started, avoid building a <a href="https://en.wikipedia.org/wiki/Rube_Goldberg_machine">Rube Goldberg machine</a>. This means starting with something you already know and refactoring it to incorporate a step function, get used to tracing issues and make sure you have all the tools and experience to operate a serverless application.</p>
<h3 id="track-executions">Track Executions</h3>
<p>Include a correlation id in all flow execution payloads; this could be seeded from the Amazon correlation id included with all API Gateway calls. This correlation id may be used for reruns of the state machine, so don&rsquo;t use it as the execution name.</p>
<h3 id="naming-things">Naming Things</h3>
<p>The execution name should include some hints as to why the flow is running, with a unique id or timestamp appended.</p>
<p>Step names should clearly indicate what each step does, as this will enable developers or operations to identify where errors or mistakes are occurring.</p>
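<p>Putting those two tips together, a small helper (my own convention, not an AWS API) might build the execution name from a hint plus a unique suffix, while the correlation id travels in the payload:</p>

```python
import json
import time
import uuid


def start_flow(sfn, state_machine_arn, hint, correlation_id, payload):
    # Execution names must be unique and at most 80 characters, so
    # append a timestamp and a short random suffix to the hint.
    name = f"{hint}-{int(time.time())}-{uuid.uuid4().hex[:8]}"[:80]
    # The correlation id rides along in the input, not in the name,
    # so a rerun can use a fresh name with the same correlation id.
    body = dict(payload, correlation_id=correlation_id)
    return sfn.start_execution(
        stateMachineArn=state_machine_arn,
        name=name,
        input=json.dumps(body),
    )
```

<p>Here <code>sfn</code> would be a boto3 Step Functions client in practice; passing it in keeps the helper testable.</p>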
<h3 id="exception-handling">Exception Handling</h3>
<p>When using <a href="https://aws.amazon.com/lambda/">Lambda</a> functions make sure you use an exception tracker such as <a href="https://www.bugsnag.com/">bugsnag</a> or <a href="https://sentry.io/welcome/">sentry</a> to make fault finding easier. This allows you to track issues over time and avoids sifting through logs looking for errors.</p>
<p>Use the retry backoff built into each step to make your flows more robust.</p>
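<p>As an illustration, a task state&rsquo;s built-in retry with exponential backoff looks like this in Amazon States Language; the error names and timings here are just examples:</p>

```json
"Retry": [
  {
    "ErrorEquals": ["States.TaskFailed", "States.Timeout"],
    "IntervalSeconds": 3,
    "MaxAttempts": 4,
    "BackoffRate": 2.0
  }
]
```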
<h3 id="logging">Logging</h3>
<p>Emit structured logs with key start and end information, and use <a href="https://aws.amazon.com/cloudwatch/">cloudwatch</a> to search them, capture metrics, and trigger alerts based on them.</p>
<h3 id="infrastructure-automation">Infrastructure Automation</h3>
<p>As an infrastructure engineer I use Step Functions to build and deploy a number of different applications, this is mainly where:</p>
<ol>
<li>The task happens often</li>
<li>Someone owns the infrastructure and integration is required to orchestrate with external systems</li>
<li>There are a lot of &ldquo;services&rdquo; of a similar shape which need to be deployed the same way</li>
</ol>
<p>When using cloudformation make sure you use change sets; this will allow you to:</p>
<ol>
<li>Print a nice list of what will change before performing a change or create.</li>
<li>Roll back in a nicer way</li>
</ol>
<p>When cloudformation changes fail try to collect the tail of the execution events to simplify fault finding.</p>
<p>When designing flows make sure they aren&rsquo;t too generic; their structure should reflect what you&rsquo;re automating, similar to an <a href="https://www.ansible.com/">ansible</a> playbook.</p>
<p>Build up a task module to interface with cloudformation, and wherever possible just use that for custom cloudformation tasks.</p>
<p>Build up a library of common tasks, which can be used by lambdas. Test these thoroughly using unit and integration tests.</p>
<p>Use common sense when managing common code; don&rsquo;t dump everything in there, and keep it to just the most important tasks. Otherwise teams or systems end up with a massive boat anchor holding them back, and contributing to the fragility of the entire platform.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Serverless Background jobs part 1</title>
      <link>https://www.wolfe.id.au/2019/05/11/serverless-background-jobs-part-1/</link>
      <pubDate>Sat, 11 May 2019 12:00:47 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2019/05/11/serverless-background-jobs-part-1/</guid>
      <description>&lt;p&gt;Background jobs form the backbone of a lot of modern applications, they are used to perform a range of asynchronous tasks, from image processing through to order processing, fulfillment and shipping. Wherever there is a need to dispatch some sort of task, then monitor or wait for it&amp;rsquo;s result.&lt;/p&gt;
&lt;p&gt;In the serverless space AWS Step Functions play a similar role to projects such as &lt;a href=&#34;https://github.com/collectiveidea/delayed_job&#34;&gt;delayed job&lt;/a&gt; or &lt;a href=&#34;https://github.com/resque/resque&#34;&gt;resque&lt;/a&gt; in ruby, &lt;a href=&#34;http://www.celeryproject.org/&#34;&gt;celery&lt;/a&gt; in python, but with the following differences:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Background jobs form the backbone of a lot of modern applications, they are used to perform a range of asynchronous tasks, from image processing through to order processing, fulfillment and shipping. Wherever there is a need to dispatch some sort of task, then monitor or wait for it&rsquo;s result.</p>
<p>In the serverless space AWS Step Functions play a similar role to projects such as <a href="https://github.com/collectiveidea/delayed_job">delayed job</a> or <a href="https://github.com/resque/resque">resque</a> in ruby, <a href="http://www.celeryproject.org/">celery</a> in python, but with the following differences:</p>
<ul>
<li>Built on a flexible flow definition language called <a href="https://states-language.net/spec.html">Amazon States Language</a> which is written in JSON</li>
<li>Powered by lambda, with native integration to SNS, SQS, Kinesis and API Gateway</li>
<li>Fully Managed by AWS</li>
</ul>
<h1 id="step-functions">Step Functions?</h1>
<p>AWS Step Functions provides a way of executing flows you have defined, along with a visual representation, like a CI pipeline, showing the current state of the execution.</p>
<p>A simple task management example, which polls the status of a task and reports its completion status, can be seen as follows:</p>
<figure>
    <img loading="lazy" src="/images/2019-05-11_stepfunction.png"/> <figcaption>
            Step Function Example
        </figcaption>
</figure>
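<p>A stripped down version of such a polling flow looks like this in Amazon States Language; the state names and Lambda ARNs are made up for illustration:</p>

```json
{
  "Comment": "Poll a task until it completes (illustrative only)",
  "StartAt": "StartTask",
  "States": {
    "StartTask": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:ap-southeast-2:123456789012:function:start-task",
      "Next": "WaitThirtySeconds"
    },
    "WaitThirtySeconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "CheckStatus"
    },
    "CheckStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:ap-southeast-2:123456789012:function:check-task-status",
      "Next": "IsComplete"
    },
    "IsComplete": {
      "Type": "Choice",
      "Choices": [
        {"Variable": "$.status", "StringEquals": "COMPLETED", "Next": "Done"}
      ],
      "Default": "WaitThirtySeconds"
    },
    "Done": {"Type": "Succeed"}
  }
}
```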

<h1 id="why-step-functions">Why Step Functions?</h1>
<p>So this is great but why should we decompose our workflows into functions and glue them together using a managed service?</p>
<p>There are a number of things to be gained by moving to Step Functions:</p>
<ol>
<li>Testing, you will be able to test each element in the chain and make sure it performs its discrete task.</li>
<li>Decoupling, you will have broken things down into pieces of code which can be refactored, or replaced, independently of each other.</li>
<li>Monitoring, given the visual nature of these pipelines you will be able to zero in on failures faster.</li>
</ol>
<p>Step Functions aren&rsquo;t the answer to every problem, but for multi-step, long running jobs they are a great solution, if you&rsquo;re fluent in the AWS ecosystem.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Using AWS DeepRacer for ROS development</title>
      <link>https://www.wolfe.id.au/2018/12/22/using-aws-deepracer-for-ros-development/</link>
      <pubDate>Sat, 22 Dec 2018 16:33:47 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2018/12/22/using-aws-deepracer-for-ros-development/</guid>
      <description>&lt;p&gt;So the &lt;a href=&#34;https://aws.amazon.com/deepracer/&#34;&gt;DeepRacer&lt;/a&gt; was released with much fanfare by &lt;a href=&#34;https://aws.amazon.com&#34;&gt;Amazon Web Services&lt;/a&gt; (AWS) at this years Reinvent conference in Las Vegas. This combines an off the shelf radio control (RC) car chassis, with an intel based compute module which has been configured to control the throttle, steering of the car. DeepRacer is powered by &lt;a href=&#34;http://wiki.ros.org/ROS/Introduction&#34;&gt;Robot Operating System&lt;/a&gt; (ROS) as a framework which is used for the internal control systems making it a very interesting device for anyone getting started in robotics.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>So the <a href="https://aws.amazon.com/deepracer/">DeepRacer</a> was released with much fanfare by <a href="https://aws.amazon.com">Amazon Web Services</a> (AWS) at this years Reinvent conference in Las Vegas. This combines an off the shelf radio control (RC) car chassis, with an intel based compute module which has been configured to control the throttle, steering of the car. DeepRacer is powered by <a href="http://wiki.ros.org/ROS/Introduction">Robot Operating System</a> (ROS) as a framework which is used for the internal control systems making it a very interesting device for anyone getting started in robotics.</p>
<p>In this post I will show how to use the DeepRacer for ROS development.</p>
<p>So why would I use DeepRacer for ROS development?</p>
<ol>
<li>Intel based so very easy to build and upload binaries / projects.</li>
<li>It works out of the box and is very well designed.</li>
<li>It will retail for 249 USD in March, and given its high quality components it will be very useful.</li>
<li>It is based on <a href="https://www.ubuntu.com/">Ubuntu</a> 16.04.3, and <a href="http://wiki.ros.org/kinetic">ROS Kinetic</a> which is a good solid starting point for ROS Development.</li>
</ol>
<p>Below is a picture of the RC car chassis, which is a really nice size.</p>
<figure>
    <img loading="lazy" src="/images/deepracer_rc_car.png"/> <figcaption>
            Base RC Car
        </figcaption>
</figure>

<h2 id="logging-into-your-deepracer">Logging into your DeepRacer</h2>
<p>Logging into the DeepRacer for the first time is very similar to the DeepLens product: plug the compute module into a monitor using a HDMI cable and a USB keyboard and mouse, then reset the user&rsquo;s password to some value which satisfies the complexity requirements. I believe it is at least 8 characters, with one upper, one lower, a number and a symbol.</p>
<p><strong>Note:</strong> The username is <code>deepracer</code>.</p>
<h2 id="preparing-your-deepracer">Preparing your DeepRacer</h2>
<p>In my case I also disabled x11, and enabled SSH.</p>
<p>The ssh server was already installed and started; to permit ssh through the firewall configured on the device I ran:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo ufw allow ssh
</span></span></code></pre></div><p>To disable the desktop environment on the next restart run the following.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo systemctl disable lightdm.service
</span></span></code></pre></div><p><strong>Note:</strong> I did this while the screen and keyboard were plugged in, then tested I could ssh to the DeepRacer from my laptop before restarting.</p>
<p>Now ssh into the DeepRacer using your login details; the <code>TAG_HOSTNAME</code> is on a tag on the bottom of the car.</p>
<pre tabindex="0"><code>ssh deepracer@TAG_HOSTNAME
</code></pre><p>Before you start to mess around on the host, you probably want to disable ufw to enable access to all the services from your laptop/desktop.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo ufw disable
</span></span></code></pre></div><p>Once you have disabled the firewall you should be able to access the video stream from ROS <code>web_video_server</code> using <a href="http://TAG_HOSTNAME.local:8080/stream_viewer?topic=/video_mjpeg">http://TAG_HOSTNAME.local:8080/stream_viewer?topic=/video_mjpeg</a> where the <code>TAG_HOSTNAME</code> is on a tag on the bottom of the car. For more information on this service see <a href="http://wiki.ros.org/web_video_server">http://wiki.ros.org/web_video_server</a>.</p>
<h2 id="disable-ros-services">Disable ROS Services</h2>
<p>We are going to disable services which relate to the DeepRacer service, including the updates service, so we can just use the control and media systems.</p>
<p>This is as simple as replacing <code>/opt/aws/deepracer/share/deepracer_launcher/launch/deepracer.launch</code> with the following content, which simply comments out some of the AWS-supplied services.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="cp">&lt;?xml version=&#34;1.0&#34;?&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;launch&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;node</span> <span class="na">pkg=</span><span class="s">&#34;web_video_server&#34;</span> <span class="na">type=</span><span class="s">&#34;web_video_server&#34;</span> <span class="na">name=</span><span class="s">&#34;web_video_server&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;param</span> <span class="na">name=</span><span class="s">&#34;ros_threads&#34;</span> <span class="na">value=</span><span class="s">&#34;1&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/node&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;node</span> <span class="na">name=</span><span class="s">&#34;servo_node&#34;</span> <span class="na">pkg=</span><span class="s">&#34;servo_pkg&#34;</span> <span class="na">type=</span><span class="s">&#34;servo_node&#34;</span> <span class="na">respawn=</span><span class="s">&#34;true&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;node</span> <span class="na">name=</span><span class="s">&#34;media_engine&#34;</span> <span class="na">pkg=</span><span class="s">&#34;media_pkg&#34;</span> <span class="na">type=</span><span class="s">&#34;media_node&#34;</span> <span class="na">respawn=</span><span class="s">&#34;true&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- &lt;node name=&#34;inference_engine&#34; pkg=&#34;inference_pkg&#34; type=&#34;inference_node&#34; respawn=&#34;true&#34; output=&#34;screen&#34;/&gt; --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- &lt;node name=&#34;inference_probability&#34; pkg=&#34;inference_pkg&#34; type=&#34;inference_probability.py&#34; respawn=&#34;true&#34;/&gt; --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- &lt;node name=&#34;model_optimizer&#34; pkg=&#34;inference_pkg&#34; type=&#34;model_optimizer_node.py&#34; respawn=&#34;true&#34; /&gt; --&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;node</span> <span class="na">name=</span><span class="s">&#34;control_node&#34;</span> <span class="na">pkg=</span><span class="s">&#34;ctrl_pkg&#34;</span> <span class="na">type=</span><span class="s">&#34;ctrl_node&#34;</span> <span class="na">respawn=</span><span class="s">&#34;true&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- &lt;node name=&#34;navigation_node&#34; pkg=&#34;ctrl_pkg&#34; type=&#34;navigation_node.py&#34; respawn=&#34;true&#34; /&gt; --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- &lt;node name=&#34;software_update&#34; pkg=&#34;software_update_pkg&#34; type=&#34;software_update_process.py&#34; respawn=&#34;true&#34; /&gt; --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- &lt;node name=&#34;webserver&#34; pkg=&#34;webserver_pkg&#34; type=&#34;webserver.py&#34; respawn=&#34;true&#34; /&gt; --&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/launch&gt;</span>
</span></span></code></pre></div><p>Now restart the DeepRacer service.</p>
<pre tabindex="0"><code>$ sudo systemctl restart deepracer-core.service
</code></pre><h2 id="is-this-thing-on">Is this thing on?</h2>
<p>To test whether or not we can still drive the DeepRacer around, we will explore, and then interface with, the control node provided.</p>
<p>Now we load the ROS environment using the AWS <code>setup.bash</code>; this will populate variables holding names and paths for services. If you have never used ROS before you may want to run through some of the <a href="http://wiki.ros.org/ROS/Tutorials">Tutorials</a>.</p>
<pre tabindex="0"><code>$ source /opt/aws/deepracer/setup.bash
</code></pre><p>Now we should just have the services we need to start working with ROS.</p>
<pre tabindex="0"><code>$ rosnode list
/control_node
/media_engine
/rosout
/servo_node
/web_video_server
</code></pre><p>Let&rsquo;s look at the topics which are now accepting messages.</p>
<pre tabindex="0"><code>$ rostopic list
/auto_drive
/calibration_drive
/manual_drive
/rosout
/rosout_agg
/video_mjpeg
</code></pre><p>Now we are interested in <code>/manual_drive</code> to test out the throttle and steering.</p>
<pre tabindex="0"><code>$ rostopic info /manual_drive
Type: ctrl_pkg/ServoCtrlMsg

Publishers: None

Subscribers: 
 * /control_node (http://amss-n4lp:37837/)
</code></pre><p>So we need to publish a message to this topic, but first we need to know its format, so let&rsquo;s print its structure.</p>
<pre tabindex="0"><code>$ rosmsg info ctrl_pkg/ServoCtrlMsg
float32 angle
float32 throttle
</code></pre><p>Now, with the DeepRacer off my desk so it cannot race away, I run the following command, which should trigger a throttle change. Note that the limit for this value is <code>0.7</code> by default, and <code>0</code> will stop it.</p>
<pre tabindex="0"><code>rostopic pub -1 /manual_drive ctrl_pkg/ServoCtrlMsg -- 0 0.3
</code></pre><p>Now we can stop the throttle by running.</p>
<pre tabindex="0"><code>rostopic pub -1 /manual_drive ctrl_pkg/ServoCtrlMsg -- 0 0
</code></pre><p>Likewise we can turn the DeepRacer to the left using the first value.</p>
<pre tabindex="0"><code>rostopic pub -1 /manual_drive ctrl_pkg/ServoCtrlMsg -- 0.9 0
</code></pre><p>And to the right.</p>
<pre tabindex="0"><code>rostopic pub -1 /manual_drive ctrl_pkg/ServoCtrlMsg -- -0.9 0
</code></pre><p>Then back to the center.</p>
<pre tabindex="0"><code>rostopic pub -1 /manual_drive ctrl_pkg/ServoCtrlMsg -- 0 0
</code></pre><p>I am really impressed with the DeepRacer so far; it is a great platform to start working with ROS at a great price, and hopefully it spurs a whole raft of great robotics projects in the future. I would also love to see more detail, and hopefully source code, for the services in this product, as they seem to be really well thought out and could most certainly provide a springboard for future innovation.</p>
<p>In my next post we will write a ROS node which will use these services to drive the DeepRacer.</p>
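<p>As a taste of what that node might look like, here is a minimal, untested sketch of a Python node that publishes <code>ServoCtrlMsg</code> messages to <code>/manual_drive</code>. The topic name and message fields come from the <code>rostopic</code>/<code>rosmsg</code> output above; the <code>clamp</code> helper and the node name <code>manual_driver</code> are my own additions, and it assumes the AWS <code>setup.bash</code> has been sourced so <code>rospy</code> and <code>ctrl_pkg</code> are importable.</p>

```python
#!/usr/bin/env python
# Sketch: drive the DeepRacer by publishing ServoCtrlMsg to /manual_drive.

SERVO_LIMIT = 0.7  # default throttle limit mentioned above


def clamp(value, limit=SERVO_LIMIT):
    """Keep an angle or throttle value within [-limit, limit]."""
    return max(-limit, min(limit, value))


def main():
    # ROS imports live inside main so the helper above stands alone;
    # this requires /opt/aws/deepracer/setup.bash to have been sourced.
    import rospy
    from ctrl_pkg.msg import ServoCtrlMsg

    rospy.init_node("manual_driver")
    pub = rospy.Publisher("/manual_drive", ServoCtrlMsg, queue_size=1)
    rate = rospy.Rate(10)  # publish at 10Hz
    while not rospy.is_shutdown():
        # straight ahead at a gentle throttle
        pub.publish(ServoCtrlMsg(angle=clamp(0.0), throttle=clamp(0.3)))
        rate.sleep()


# On the car, run the node with:
#   main()
```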
]]></content:encoded>
    </item>
    
    <item>
      <title>Diving into vgo from the Golang project</title>
      <link>https://www.wolfe.id.au/2018/03/01/diving-into-vgo-from-the-golang-project/</link>
      <pubDate>Thu, 01 Mar 2018 21:02:47 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2018/03/01/diving-into-vgo-from-the-golang-project/</guid>
      <description>&lt;p&gt;I have been looking into the &lt;a href=&#34;https://github.com/golang/vgo&#34;&gt;Versioned Go Command&lt;/a&gt; which was released recently by &lt;a href=&#34;https://twitter.com/_rsc&#34;&gt;Russ Cox&lt;/a&gt;. In summary, this project is a pretty big rethink of how golang retrieves and stores packages used to build applications, and more specifically how versioned modules are introduced while retaining reproducible builds.&lt;/p&gt;</description>
&lt;p&gt;The highlights, and standout features for me are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Adds intrinsic support for versioning into the go command.&lt;/li&gt;
&lt;li&gt;Includes a few new sub commands such as &lt;code&gt;vendor&lt;/code&gt; and &lt;code&gt;verify&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Incorporates a lot of new ideas around the storage and management of golang modules, which seems to correlate to something akin to a github project. Note there is support for more than one module in a repository but the general idea is one module per repository.&lt;/li&gt;
&lt;li&gt;Adds a new mechanism to retrieve and cache modules in zip files, which will supersede the current source repository.&lt;/li&gt;
&lt;li&gt;Adds a new proxy mechanism, enabling organisations to provide a mediated, verified module server to developers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But probably the biggest change is the move away from the much maligned &lt;code&gt;$GOPATH&lt;/code&gt;, which will, as far as I can tell, be deprecated over time. Developers will create their projects outside of the &lt;code&gt;$GOPATH&lt;/code&gt;, using a file named &lt;code&gt;go.mod&lt;/code&gt; to provide a pointer to the project&#39;s namespace.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been looking into the <a href="https://github.com/golang/vgo">Versioned Go Command</a> which was released recently by <a href="https://twitter.com/_rsc">Russ Cox</a>. In summary, this project is a pretty big rethink of how golang retrieves and stores packages used to build applications, and more specifically how versioned modules are introduced while retaining reproducible builds.</p>
<p>The highlights, and standout features for me are as follows:</p>
<ul>
<li>Adds intrinsic support for versioning into the go command.</li>
<li>Includes a few new sub commands such as <code>vendor</code> and <code>verify</code></li>
<li>Incorporates a lot of new ideas around the storage and management of golang modules, which seems to correlate to something akin to a github project. Note there is support for more than one module in a repository but the general idea is one module per repository.</li>
<li>Adds a new mechanism to retrieve and cache modules in zip files, which will supersede the current source repository.</li>
<li>Adds a new proxy mechanism, enabling organisations to provide a mediated, verified module server to developers.</li>
</ul>
<p>But probably the biggest change is the move away from the much maligned <code>$GOPATH</code>, which will, as far as I can tell, be deprecated over time. Developers will create their projects outside of the <code>$GOPATH</code>, using a file named <code>go.mod</code> to provide a pointer to the project&rsquo;s namespace.</p>
<p>If you&rsquo;re interested in the full background, take a look at <a href="https://research.swtch.com/vgo">Go &amp; Versioning</a>; it is quite a long read.</p>
<p>So what does this look like in practice?</p>
<p>To illustrate how <code>vgo</code> works in practice, let&rsquo;s create a project outside my <code>$GOPATH</code>; in my case I created a folder called <code>~/Code/hacking/golang-lambda-func</code> and added a <code>main.go</code> containing code for a simple Lambda API Gateway program. This code is copied directly from <a href="https://aws.amazon.com/blogs/compute/announcing-go-support-for-aws-lambda/">Announcing Go Support for AWS Lambda</a>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="kn">package</span><span class="w"> </span><span class="nx">main</span><span class="w"> </span><span class="c1">// import &#34;github.com/wolfeidau/golang-lambda-func&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;log&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;github.com/pkg/errors&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;github.com/aws/aws-lambda-go/events&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="s">&#34;github.com/aws/aws-lambda-go/lambda&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">var</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="c1">// ErrNameNotProvided is thrown when a name is not provided</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">ErrNameNotProvided</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="nx">errors</span><span class="p">.</span><span class="nf">New</span><span class="p">(</span><span class="s">&#34;no name was provided in the HTTP body&#34;</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c1">// Handler is your Lambda function handler</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c1">// It uses Amazon API Gateway request/responses provided by the aws-lambda-go/events package,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c1">// However you could use other event sources (S3, Kinesis etc), or JSON-decoded primitive types such as &#39;string&#39;.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">Handler</span><span class="p">(</span><span class="nx">request</span><span class="w"> </span><span class="nx">events</span><span class="p">.</span><span class="nx">APIGatewayProxyRequest</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="nx">events</span><span class="p">.</span><span class="nx">APIGatewayProxyResponse</span><span class="p">,</span><span class="w"> </span><span class="kt">error</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="c1">// stdout and stderr are sent to AWS CloudWatch Logs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">log</span><span class="p">.</span><span class="nf">Printf</span><span class="p">(</span><span class="s">&#34;Processing Lambda request %s\n&#34;</span><span class="p">,</span><span class="w"> </span><span class="nx">request</span><span class="p">.</span><span class="nx">RequestContext</span><span class="p">.</span><span class="nx">RequestID</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="c1">// If no name is provided in the HTTP request body, throw an error</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">if</span><span class="w"> </span><span class="nb">len</span><span class="p">(</span><span class="nx">request</span><span class="p">.</span><span class="nx">Body</span><span class="p">)</span><span class="w"> </span><span class="p">&lt;</span><span class="w"> </span><span class="mi">1</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="k">return</span><span class="w"> </span><span class="nx">events</span><span class="p">.</span><span class="nx">APIGatewayProxyResponse</span><span class="p">{},</span><span class="w"> </span><span class="nx">ErrNameNotProvided</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="k">return</span><span class="w"> </span><span class="nx">events</span><span class="p">.</span><span class="nx">APIGatewayProxyResponse</span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="nx">Body</span><span class="p">:</span><span class="w">       </span><span class="s">&#34;Hello &#34;</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">request</span><span class="p">.</span><span class="nx">Body</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">		</span><span class="nx">StatusCode</span><span class="p">:</span><span class="w"> </span><span class="mi">200</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="p">},</span><span class="w"> </span><span class="kc">nil</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">main</span><span class="p">()</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">	</span><span class="nx">lambda</span><span class="p">.</span><span class="nf">Start</span><span class="p">(</span><span class="nx">Handler</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Firstly, note that the added comment <code>// import &quot;github.com/wolfeidau/golang-lambda-func&quot;</code> indicates where our <code>module</code> is located, the current equivalent of creating that structure in our <code>$GOPATH</code>.</p>
<p>This depends on a few projects hosted on GitHub, which translate into modules in <code>vgo</code>. To build the project we run <code>vgo</code>; if you want to try this, just run <code>go get -u golang.org/x/vgo</code> to install it.</p>
<pre tabindex="0"><code>$ vgo build
vgo: resolving import &#34;github.com/aws/aws-lambda-go/events&#34;
vgo: finding github.com/aws/aws-lambda-go (latest)
vgo: adding github.com/aws/aws-lambda-go v1.1.0
vgo: resolving import &#34;github.com/pkg/errors&#34;
vgo: finding github.com/pkg/errors (latest)
vgo: adding github.com/pkg/errors v0.8.0
vgo: finding github.com/pkg/errors v0.8.0
vgo: finding github.com/aws/aws-lambda-go v1.1.0
vgo: finding github.com/urfave/cli v1.20.0
vgo: downloading github.com/aws/aws-lambda-go v1.1.0
vgo: downloading github.com/pkg/errors v0.8.0
</code></pre><p>Once this has completed, a <code>go.mod</code> file is created which records our module&rsquo;s path and its dependencies. The content of this file is as follows:</p>
<pre tabindex="0"><code>module &#34;github.com/wolfeidau/golang-lambda-func&#34;

require (
	&#34;github.com/aws/aws-lambda-go&#34; v1.1.0
	&#34;github.com/pkg/errors&#34; v0.8.0
)
</code></pre><p>Now what is also interesting is that <code>vgo</code> has created a cache in <code>$GOPATH/src/v</code>, which looks something like the following. The <code>find</code> command just lists the directories under this path.</p>
<pre tabindex="0"><code>$ find /Users/markw/go/src/v -type d
/Users/markw/go/src/v
/Users/markw/go/src/v/cache
/Users/markw/go/src/v/cache/github.com
/Users/markw/go/src/v/cache/github.com/urfave
/Users/markw/go/src/v/cache/github.com/urfave/cli
/Users/markw/go/src/v/cache/github.com/urfave/cli/@v
/Users/markw/go/src/v/cache/github.com/aws
/Users/markw/go/src/v/cache/github.com/aws/aws-lambda-go
/Users/markw/go/src/v/cache/github.com/aws/aws-lambda-go/@v
/Users/markw/go/src/v/cache/github.com/pkg
/Users/markw/go/src/v/cache/github.com/pkg/errors
/Users/markw/go/src/v/cache/github.com/pkg/errors/@v
/Users/markw/go/src/v/github.com
/Users/markw/go/src/v/github.com/aws
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/cmd
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/cmd/build-lambda-zip
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/lambda
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/lambda/messages
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/lambdacontext
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/events
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/events/testdata
/Users/markw/go/src/v/github.com/aws/aws-lambda-go@v1.1.0/events/test
/Users/markw/go/src/v/github.com/pkg
/Users/markw/go/src/v/github.com/pkg/errors@v0.8.0
</code></pre><p>So what it has created is a versioned cache under <code>$GOPATH/src/v</code> with a few notable features:</p>
<ul>
<li>No <code>.git</code> directories, this code is cloned and stored without version information.</li>
<li>It has some interesting folders ending in <code>@v</code> which contain other files.</li>
</ul>
<pre tabindex="0"><code>$ ls -1 pkg/errors/@v
v0.8.0.info
v0.8.0.mod
v0.8.0.zip
v0.8.0.ziphash
</code></pre><p>As mentioned above, this is an example of the new compressed module packaging which is being introduced.</p>
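<p>If you are curious, the <code>.info</code> files in these <code>@v</code> directories appear to be small JSON documents recording the version and commit time. Here is a hypothetical stdlib sketch for listing them; the JSON field names <code>Version</code> and <code>Time</code> are an assumption based on the module cache format, so treat this as illustrative only.</p>

```python
import json
from pathlib import Path


def list_cached_versions(at_v_dir):
    """Return (version, time) pairs read from *.info files in an @v cache dir."""
    versions = []
    for info in sorted(Path(at_v_dir).glob("*.info")):
        meta = json.loads(info.read_text())
        versions.append((meta.get("Version"), meta.get("Time")))
    return versions
```

<p>For example, pointing it at <code>$GOPATH/src/v/cache/github.com/pkg/errors/@v</code> should report the <code>v0.8.0</code> entry from the listing above.</p>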
<p>Well, hopefully that shines a bit more light on what <code>vgo</code> does behind the scenes, and some of the notable differences between it and how the <code>go</code> command currently works. I really like where <code>vgo</code> is going; overall it feels like a really good preview of things to come in a post-<code>$GOPATH</code> world. I will endeavour to dig into some of the other new features as I discover more.</p>
<p>If you&rsquo;re after a better explanation of how to get up and running with vgo, make sure you watch <a href="https://www.gophersnacks.com/programs/using-vgo-for-go-dependency-management">Using vgo for Go Dependency Management</a> by <a href="https://twitter.com/bketelsen">@bketelsen</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>AWS User Federation with Keycloak</title>
      <link>https://www.wolfe.id.au/2017/11/05/aws-user-federation-with-keycloak/</link>
      <pubDate>Sun, 05 Nov 2017 10:22:47 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2017/11/05/aws-user-federation-with-keycloak/</guid>
      <description>&lt;p&gt;As a user of Amazon Web Services (AWS) in large organisations I am always mindful of providing a mechanism to enable single sign on (SSO) to simplify the login process for users, enable strict controls for the organisation, and simplify on/off boarding for operations staff. As an advocate for open source I was happy to find &lt;a href=&#34;http://www.keycloak.org/index.html&#34;&gt;Keycloak&lt;/a&gt;, which is developed by &lt;a href=&#34;https://www.redhat.com/en&#34;&gt;Redhat&lt;/a&gt; and is now an option for organisations looking for an open solution to identity federation with AWS.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As a user of Amazon Web Services (AWS) in large organisations I am always mindful of providing a mechanism to enable single sign on (SSO) to simplify the login process for users, enable strict controls for the organisation, and simplify on/off boarding for operations staff. As an advocate for open source I was happy to find <a href="http://www.keycloak.org/index.html">Keycloak</a>, which is developed by <a href="https://www.redhat.com/en">Redhat</a> and is now an option for organisations looking for an open solution to identity federation with AWS.</p>
<p>This post will detail how I configured Keycloak with AWS SAML federation.</p>
<p>To demonstrate Keycloak I have setup a docker-compose project which can be cloned from <a href="https://github.com/wolfeidau/keycloak-docker-compose">https://github.com/wolfeidau/keycloak-docker-compose</a>.</p>
<p>Assuming you have <a href="https://www.docker.com/docker-mac">docker for mac</a> installed, you should be able to navigate to the project then run.</p>
<pre tabindex="0"><code>docker-compose up -d
</code></pre><p>Then to ensure it is all working you should be able to navigate to http://0.0.0.0:18080/auth/admin/master/console/#/realms/master.</p>
<h3 id="setup-of-the-aws-saml-client">Setup of the AWS SAML Client</h3>
<p>To simplify the automated setup we can export a client configuration file containing the AWS SAML configuration; in my case I configured the client in the master realm and then exported it.</p>
<p>The first thing you need to do is download <a href="https://signin.aws.amazon.com/static/saml-metadata.xml">https://signin.aws.amazon.com/static/saml-metadata.xml</a>; just put it in your Downloads folder.</p>
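<p>As a quick sanity check of the downloaded file, you can pull out the SAML <code>entityID</code> with a few lines of Python; for AWS this should be <code>urn:amazon:webservices</code>, the same value used as the clientId later in this post. This is just an illustrative sketch, with element and attribute names taken from the SAML 2.0 metadata schema, run against a cut-down stand-in document.</p>

```python
import xml.etree.ElementTree as ET


def entity_id(metadata_xml):
    """Extract the entityID attribute from a SAML EntityDescriptor document."""
    return ET.fromstring(metadata_xml).attrib.get("entityID")


# Minimal stand-in document; the real saml-metadata.xml also carries
# signing certificates and endpoint descriptors.
sample = (
    '<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" '
    'entityID="urn:amazon:webservices"></EntityDescriptor>'
)
print(entity_id(sample))  # urn:amazon:webservices
```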
<p>Once you have logged in, navigate to clients at http://0.0.0.0:18080/auth/admin/master/console/#/realms/master/clients, hit the create button, import the <code>saml-metadata.xml</code> file, then hit save.</p>
<figure>
    <img loading="lazy" src="/images/2017-11-05_keycloak-create-aws-client.png"/> <figcaption>
            Keycloak AWS Client Creation
        </figcaption>
</figure>

<p>Now configure:</p>
<ul>
<li><strong>IDP Initiated SSO URL Name</strong> to <code>amazon-aws</code></li>
<li><strong>Base URL</strong> to <code>/auth/realms/wolfeidau/protocol/saml/clients/amazon-aws</code></li>
</ul>
<figure>
    <img loading="lazy" src="/images/2017-11-05_keycloak-configure-aws-client.png"/> <figcaption>
            Keycloak AWS Client Configuration
        </figcaption>
</figure>

<p>Lastly, under the Scope tab disable Full Scope Allowed; this will ensure we only pass through the roles configured in our client to AWS.</p>
<figure>
    <img loading="lazy" src="/images/2017-11-05_keycloak-configure-aws-client-scopes.png"/> <figcaption>
            Keycloak AWS Client Scope Configuration
        </figcaption>
</figure>

<p>Now you can navigate back to http://0.0.0.0:18080/auth/admin/master/console/#/realms/master/clients and hit the export button next to the aws client.</p>
<h3 id="keycloak-setup-using-admin-cli">Keycloak Setup Using Admin CLI</h3>
<p>As a big proponent of automation I really wanted to illustrate, and indeed learn, how to automate the setup of Keycloak, hence the CLI approach.</p>
<p>To get the tools we need for this guide download keycloak from <a href="http://www.keycloak.org/downloads.html">Keycloak Downloads</a> and extract this to say <code>$HOME/Development/keycloak</code> then add <code>$HOME/Development/keycloak/bin</code> to your <code>$PATH</code> as per <a href="http://www.keycloak.org/docs/latest/server_admin/topics/admin-cli.html">Keycloak administration CLI docs</a>.</p>
<pre tabindex="0"><code>export PATH=$PATH:$HOME/Development/keycloak/bin
</code></pre><p><strong>Note:</strong> Commands which create new objects generate a unique GUID which looks like <code>6c684579-51a1-4bdf-a694-d641199874d8</code>; you will need to adjust those values in the subsequent commands.</p>
<p>Now we can use the administration CLI program to configure our keycloak service.</p>
<p>To test it out, configure your credentials locally.</p>
<pre tabindex="0"><code>kcadm.sh config credentials --server http://0.0.0.0:18080/auth --realm master --user admin
</code></pre><p>Create a realm; in my case I am naming it <code>wolfeidau</code>.</p>
<pre tabindex="0"><code>$ kcadm.sh create realms -s realm=wolfeidau -s enabled=true
</code></pre><p>Import the keycloak client for AWS and add it to the <code>wolfeidau</code> realm we created; the JSON file is in the <code>keycloak-docker-compose</code> project.</p>
<pre tabindex="0"><code>$ kcadm.sh create clients -r wolfeidau -s clientId=&#34;urn:amazon:webservices&#34; -s enabled=true -f urn-amazon-webservices.json
Created new client with id &#39;6c684579-51a1-4bdf-a694-d641199874d8&#39;
</code></pre><p>Create our AWS role under the AWS client. Note this is an example name; you will need to replace 123456789012 with your account id.</p>
<pre tabindex="0"><code>kcadm.sh create clients/6c684579-51a1-4bdf-a694-d641199874d8/roles -r wolfeidau -s &#39;name=arn:aws:iam::123456789012:role/wolfeidau-admin,arn:aws:iam::123456789012:saml-provider/docker-keycloak&#39; -s &#39;description=AWS Administration Access&#39;
Created new role with id &#39;docker-keycloak&#39;
</code></pre><p>Create a group to grant AWS administration access.</p>
<pre tabindex="0"><code>$ kcadm.sh create groups -r wolfeidau -s name=aws-admins
Created new group with id &#39;dd02ed86-dd49-47c6-bd8a-5f74844b56d0&#39;
</code></pre><p>Add a role to the group. Note this is an example name; you will need to replace 123456789012 with your account id.</p>
<pre tabindex="0"><code>$ kcadm.sh add-roles -r wolfeidau --gname &#39;aws-admins&#39; --cclientid &#39;urn:amazon:webservices&#39;  --rolename &#39;arn:aws:iam::123456789012:role/wolfeidau-admin,arn:aws:iam::123456789012:saml-provider/docker-keycloak&#39;
</code></pre><p>Create a user for testing.</p>
<pre tabindex="0"><code>$ kcadm.sh create users -r wolfeidau -s username=wolfeidau -s email=mark@wolfe.id.au -s enabled=true
Created new user with id &#39;eb02cbfd-fa9c-4094-a437-3a218be53fe9&#39;
</code></pre><p>Reset the user&rsquo;s password and require an update on login.</p>
<pre tabindex="0"><code>$ kcadm.sh update users/eb02cbfd-fa9c-4094-a437-3a218be53fe9/reset-password -r wolfeidau -s type=password -s value=NEWPASSWORD -s temporary=true -n
</code></pre><p>Add the user to our AWS administration group.</p>
<pre tabindex="0"><code>$ kcadm.sh update users/eb02cbfd-fa9c-4094-a437-3a218be53fe9/groups/dd02ed86-dd49-47c6-bd8a-5f74844b56d0 -r wolfeidau -s realm=wolfeidau -s userId=eb02cbfd-fa9c-4094-a437-3a218be53fe9 -s groupId=dd02ed86-dd49-47c6-bd8a-5f74844b56d0 -n
</code></pre><p>Export the metadata file required by AWS to set up the SAML provider.</p>
<pre tabindex="0"><code>$ kcadm.sh get -r wolfeidau clients/6c684579-51a1-4bdf-a694-d641199874d8/installation/providers/saml-idp-descriptor &gt; client-tailored-saml-idp-metadata.xml
</code></pre><h3 id="aws-setup">AWS Setup</h3>
<p>Create the AWS SAML Provider in your account using the metadata file exported from keycloak.</p>
<pre tabindex="0"><code>aws iam create-saml-provider --saml-metadata-document file://client-tailored-saml-idp-metadata.xml --name docker-keycloak
</code></pre><p>Deploy the cloudformation template supplied in the <code>keycloak-docker-compose</code> project; this contains the SAML SSO IAM roles and saves clicking around in the UI.</p>
<pre tabindex="0"><code>aws cloudformation create-stack --capabilities CAPABILITY_IAM --stack-name sso-roles --template-body file://sso-roles-cfn.yaml
</code></pre><p><strong>Note:</strong> You can just create the saml provider and launch the cloudformation from the AWS console.</p>
<h3 id="logging-into-aws">Logging Into AWS</h3>
<p>Now you should be ready to log into AWS via Keycloak using the link http://0.0.0.0:18080/auth/realms/wolfeidau/protocol/saml/clients/amazon-aws.</p>
<h3 id="command-line-saml-authentication">Command Line SAML Authentication</h3>
<p>To enable the use of SAML by command line tools such as Ansible and the AWS CLI, my colleagues and I developed <a href="https://github.com/Versent/saml2aws">saml2aws</a>.</p>
<h3 id="references">References</h3>
<ul>
<li><a href="http://www.keycloak.org/docs/latest/server_admin/topics/admin-cli.html">Keycloak Admin CLI Docs</a></li>
<li><a href="http://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_saml.html">Troubleshooting SAML 2.0 Federation with AWS</a></li>
<li><a href="https://stories.scandiweb.com/sign-in-to-amazon-aws-using-saml-protocol-and-keycloak-as-identity-provider-e3798387de99">Sign in to Amazon AWS using SAML protocol and Keycloak as Identity Provider</a></li>
</ul>
<!-- 
Import the file into the AWS console by navigating to [IAM Identity Providers](https://console.aws.amazon.com/iam/home?region=us-west-2#/providers), click on your provider which in my case named `docker-keycloak` and click on the Upload Metadata.

![IAM Identity Providers](/images/2017-11-05_keycloak-screen-iam-identity-providers.png) -->]]></content:encoded>
    </item>
    
    <item>
      <title>So you want to use DynamoDB?</title>
      <link>https://www.wolfe.id.au/2017/05/25/so-you-want-to-use-dynamodb/</link>
      <pubDate>Thu, 25 May 2017 22:22:47 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2017/05/25/so-you-want-to-use-dynamodb/</guid>
      <description>&lt;p&gt;Over the last few months I have been working on a project which uses &lt;a href=&#34;https://aws.amazon.com/dynamodb/&#34;&gt;DynamoDB&lt;/a&gt; almost exclusively for persistence, and this has been a big challenge for everyone on the team. As developers, most of us are comfortable with relational database management systems (RDBMS), so the move to a primitive key/value store has been fun, and we have learnt a lot. To capture some of these learnings I have written this article; hopefully it will help those who embark on a similar journey.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the last few months I have been working on a project which uses <a href="https://aws.amazon.com/dynamodb/">DynamoDB</a> almost exclusively for persistence, and this has been a big challenge for everyone on the team. As developers, most of us are comfortable with relational database management systems (RDBMS), so the move to a primitive key/value store has been fun, and we have learnt a lot. To capture some of these learnings I have written this article; hopefully it will help those who embark on a similar journey.</p>
<h2 id="why-dynamodb">Why DynamoDB?</h2>
<p>Some of the advantages DynamoDB offers:</p>
<ul>
<li>A Key/Value model where the values are any number of fields</li>
<li>Simplified data access</li>
<li>Low operational overhead</li>
<li>Low cost: a well-tuned DynamoDB table can cost just a few dollars a month to operate</li>
</ul>
<p>As a developer getting started with DynamoDB, you need to know about:</p>
<ol>
<li>Eventual consistency. This is integral to how DynamoDB achieves its cost, resilience and scalability. Simple things, such as writing a record and then retrieving it straight afterwards, require some logic to cope with records which aren&rsquo;t visible yet.</li>
<li>The equivalent of <code>select * from Table</code> is not recommended when working with DynamoDB, as it triggers a scan operation, which is less efficient than other operations and should only be used on small tables.</li>
</ol>
<p>Using any key/value store can be tricky at first, especially if you’re used to relational databases. I have put together a list of recommendations and tips which will hopefully help those starting out with this product.</p>
<h2 id="retries">Retries</h2>
<p>When you insert data into DynamoDB, not every shard will immediately see it, so an attempt to read the data back from the table may not return the value you&rsquo;re looking for. If you insert a new row and then attempt to read it immediately afterwards, you may get an empty response.</p>
<p>To mitigate this you will need to implement retries, ideally with a back-off to avoid exhausting your provisioned throughput.</p>
<p>Global secondary indexes (GSIs) further complicate this, as they are also updated eventually (in our experience, even more eventually than the base table). Again, when inserting rows you want to access straight afterwards, you may need to check that the row is present in the index, using a similar retry.</p>
<h2 id="data-modelling">Data Modelling</h2>
<p>The first big thing you need to understand is that DynamoDB doesn&rsquo;t have relationships. In most cases it is better to start by storing related data denormalised in a single record, using the document feature of the client APIs, because it can be difficult to keep related data in sync across tables.</p>
<p>I recommend keeping everything in a single record for as long as you can.</p>
<h2 id="pagination">Pagination</h2>
<p>Although most of the clients provided by Amazon have a concept of paging built in, it is forward-only, which makes building a classic paginated list quite a bit harder. This is best illustrated with some excerpts from the DynamoDB API.</p>
<p>Firstly, in <code>QueryInput</code> from the golang AWS SDK, we have the ability to pass in an <code>ExclusiveStartKey</code>.</p>
<pre tabindex="0"><code>type QueryInput struct {
...
    // The primary key of the first item that this operation will evaluate. Use
    // the value that was returned for LastEvaluatedKey in the previous operation.
    //
    // The data type for ExclusiveStartKey must be String, Number or Binary. No
    // set data types are allowed.
    ExclusiveStartKey map[string]*AttributeValue `type:&#34;map&#34;`
...
}
</code></pre><p>And in the <code>QueryOutput</code> we have the <code>LastEvaluatedKey</code>.</p>
<pre tabindex="0"><code>type QueryOutput struct {
...
    // The primary key of the item where the operation stopped, inclusive of the
    // previous result set. Use this value to start a new operation, excluding this
    // value in the new request.
    //
    // If LastEvaluatedKey is empty, then the &#34;last page&#34; of results has been processed
    // and there is no more data to be retrieved.
    //
    // If LastEvaluatedKey is not empty, it does not necessarily mean that there
    // is more data in the result set. The only way to know when you have reached
    // the end of the result set is when LastEvaluatedKey is empty.
    LastEvaluatedKey map[string]*AttributeValue `type:&#34;map&#34;`
...
}
</code></pre><p>Given all we have are some keys, whose items may or may not have since been deleted, it is very difficult to build a classic paged view.</p>
<p>So it has become clear to me that we need to embrace a new strategy for displaying pages of results. Luckily, lots of others have run into this issue, and the common patterns are to:</p>
<ol>
<li>Use infinite scrolling, similar to Twitter and other social media sites.</li>
<li>Maintain the state in the client with a cache of pages which have previously been loaded.</li>
</ol>
<p>For more information on this see <a href="https://blog.codinghorror.com/the-end-of-pagination/">The End of Pagination</a>.</p>
<h2 id="sorting">Sorting</h2>
<p>Just a small note on sorting: in more complicated tables you will need to add indexes purely to sort data in a specific way.</p>
<p>Also, within the AWS API, sorting uses the rather interestingly named <code>ScanIndexForward</code>.</p>
<pre tabindex="0"><code>    // If ScanIndexForward is true, DynamoDB returns the results in the order in
    // which they are stored (by sort key value). This is the default behavior.
    // If ScanIndexForward is false, DynamoDB reads the results in reverse order
    // by sort key value, and then returns the results to the client.
    ScanIndexForward *bool `type:&#34;boolean&#34;`
</code></pre><h2 id="you-are-not-alone">You are not alone</h2>
<p>These challenges presented by DynamoDB are not unique: NoSQL databases such as Riak and Cassandra share some of the same limitations, again to enable resilience and scalability. When searching for ideas, suggestions or strategies to deal with them, you may also find answers in the communities around these open source projects.</p>
<h2 id="in-closing">In Closing</h2>
<p>I think it is important to note that learning DynamoDB will broaden your horizons and in some ways change the way you look at persistence, which in my view is a good thing.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>bootstrap an apex golang project</title>
      <link>https://www.wolfe.id.au/2016/08/13/bootstrap-an-apex-golang-project/</link>
      <pubDate>Sat, 13 Aug 2016 18:30:47 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2016/08/13/bootstrap-an-apex-golang-project/</guid>
      <description>&lt;p&gt;Lately I have been using &lt;a href=&#34;http://apex.run&#34;&gt;apex&lt;/a&gt; to build a side project; this tool streamlines building and deploying serverless applications using &lt;a href=&#34;https://aws.amazon.com/lambda/&#34;&gt;AWS Lambda&lt;/a&gt;. While working on this project I have helped others get started with &lt;a href=&#34;http://golang.org&#34;&gt;golang&lt;/a&gt; at the same time as apex.&lt;/p&gt;
&lt;p&gt;My general strategy for building apex applications is to build a standalone version of the functionality on my machine, typically in a way which makes the code reusable, and then import and use that in apex. This post will run through how I do this.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Lately I have been using <a href="http://apex.run">apex</a> to build a side project; this tool streamlines building and deploying serverless applications using <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>. While working on this project I have helped others get started with <a href="http://golang.org">golang</a> at the same time as apex.</p>
<p>My general strategy for building apex applications is to build a standalone version of the functionality on my machine, typically in a way which makes the code reusable, and then import and use that in apex. This post will run through how I do this.</p>
<p>Firstly, you will need to set up golang; I have documented how to <a href="https://www.wolfe.id.au/2016/08/12/bootstrap-a-golang-project/">bootstrap a golang project</a>.</p>
<p>Let&rsquo;s go ahead and make a project; this will hold our reusable code and a test command line tool. Note that <code>wolfeidau</code> needs to be changed to your GitHub login.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">mkdir -p $GOPATH/src/github.com/wolfeidau/shorten
</span></span><span class="line"><span class="cl">cd !$
</span></span></code></pre></div><p>Now set up a subfolder for commands and create the main test program file.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">mkdir -p cmds/shorten
</span></span><span class="line"><span class="cl">touch cmds/shorten/main.go
</span></span></code></pre></div><p>Next we create a file in the base of the project which will contain the reusable parts of our application. I normally use the project name for the first file I create and pull things out into other <code>.go</code> files as it grows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">touch shorten.go shorten_test.go
</span></span></code></pre></div><p>Add a package declaration and some code as follows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="c1">// Package shorten contains utility functions for shortening a URL.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">package</span><span class="w"> </span><span class="nx">shorten</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;math/rand&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;time&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">var</span><span class="w"> </span><span class="nx">letters</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="p">[]</span><span class="nb">rune</span><span class="p">(</span><span class="s">&#34;abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_-.&#34;</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">init</span><span class="p">()</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">rand</span><span class="p">.</span><span class="nf">Seed</span><span class="p">(</span><span class="nx">time</span><span class="p">.</span><span class="nf">Now</span><span class="p">().</span><span class="nf">UnixNano</span><span class="p">())</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">RandSeq</span><span class="p">(</span><span class="nx">n</span><span class="w"> </span><span class="kt">int</span><span class="p">)</span><span class="w"> </span><span class="kt">string</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">b</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nb">make</span><span class="p">([]</span><span class="kt">rune</span><span class="p">,</span><span class="w"> </span><span class="nx">n</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="k">for</span><span class="w"> </span><span class="nx">i</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="k">range</span><span class="w"> </span><span class="nx">b</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nx">b</span><span class="p">[</span><span class="nx">i</span><span class="p">]</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="nx">letters</span><span class="p">[</span><span class="nx">rand</span><span class="p">.</span><span class="nf">Intn</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="nx">letters</span><span class="p">))]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="k">return</span><span class="w"> </span><span class="nb">string</span><span class="p">(</span><span class="nx">b</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Now copy the following test into the <code>shorten_test.go</code> file.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="kn">package</span><span class="w"> </span><span class="nx">shorten</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;testing&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;github.com/stretchr/testify/assert&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">TestRandSeq</span><span class="p">(</span><span class="nx">t</span><span class="w"> </span><span class="o">*</span><span class="nx">testing</span><span class="p">.</span><span class="nx">T</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">v</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nf">RandSeq</span><span class="p">(</span><span class="mi">12</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">assert</span><span class="p">.</span><span class="nf">Equal</span><span class="p">(</span><span class="nx">t</span><span class="p">,</span><span class="w"> </span><span class="mi">12</span><span class="p">,</span><span class="w"> </span><span class="nb">len</span><span class="p">(</span><span class="nx">v</span><span class="p">))</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Now let&rsquo;s add some code to <code>cmds/shorten/main.go</code> to call this function. I am going to use <a href="https://github.com/alecthomas/kingpin">github.com/alecthomas/kingpin</a> to parse command line flags.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="kn">package</span><span class="w"> </span><span class="nx">main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="s">&#34;fmt&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="s">&#34;gopkg.in/alecthomas/kingpin.v2&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="s">&#34;github.com/wolfeidau/shorten&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">var</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nx">length</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="nx">kingpin</span><span class="p">.</span><span class="nf">Arg</span><span class="p">(</span><span class="s">&#34;length&#34;</span><span class="p">,</span><span class="w"> </span><span class="s">&#34;Length of random string&#34;</span><span class="p">).</span><span class="nf">Int</span><span class="p">()</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">main</span><span class="p">()</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nx">kingpin</span><span class="p">.</span><span class="nf">Version</span><span class="p">(</span><span class="s">&#34;0.0.1&#34;</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nx">kingpin</span><span class="p">.</span><span class="nf">Parse</span><span class="p">()</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nx">fmt</span><span class="p">.</span><span class="nf">Println</span><span class="p">(</span><span class="s">&#34;Random String:&#34;</span><span class="p">,</span><span class="w"> </span><span class="nx">shorten</span><span class="p">.</span><span class="nf">RandSeq</span><span class="p">(</span><span class="o">*</span><span class="nx">length</span><span class="p">))</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Pull your dependencies into the local <code>GOPATH</code> using <code>go get</code> and run the tests.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">go get -u -v ./...
</span></span><span class="line"><span class="cl">go test -v ./...
</span></span></code></pre></div><p>Test output should be as follows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">=== RUN   TestRandSeq
</span></span><span class="line"><span class="cl">--- PASS: TestRandSeq (0.00s)
</span></span><span class="line"><span class="cl">PASS
</span></span><span class="line"><span class="cl">ok      github.com/wolfeidau/shorten    0.011s
</span></span><span class="line"><span class="cl">?       github.com/wolfeidau/shorten/cmds/shorten    [no test files]
</span></span></code></pre></div><p>Run the test program.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">go run cmds/shorten/main.go 23
</span></span></code></pre></div><p>The response will be something like this.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">Random String: YFynJVJYSJqsFejIGWGbEPF
</span></span></code></pre></div><p>Now we can build an apex project using the same method.</p>
<pre tabindex="0"><code>cd $GOPATH/src/github.com/wolfeidau
mkdir shorten-apex
cd !$
</code></pre><p>Next we initialise the apex project and remove the example function. Note this will set up an environment in the AWS Sydney region using your default AWS profile.</p>
<pre tabindex="0"><code>apex init --region ap-southeast-2
rm -rf functions/hello
</code></pre><p>Create a function directory and a Go file.</p>
<pre tabindex="0"><code>mkdir functions/shorten
touch functions/shorten/main.go
echo &#39;*.go&#39; &gt;&gt; functions/shorten/.apexignore
</code></pre><p>Copy the following code into your <code>main.go</code>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="kn">package</span><span class="w"> </span><span class="nx">main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;encoding/json&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;fmt&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;net/url&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;time&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;github.com/apex/go-apex&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="s">&#34;github.com/wolfeidau/shorten&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">const</span><span class="w"> </span><span class="p">(</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">domain</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="s">&#34;https://s.example.com/&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c1">// ~83733937890625 should be enough random values</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c1">// this assumes 55^8</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">length</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="mi">8</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">type</span><span class="w"> </span><span class="nx">message</span><span class="w"> </span><span class="kd">struct</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">ShortURL</span><span class="w">  </span><span class="kt">string</span><span class="w"> </span><span class="s">`json:&#34;shortUrl&#34;`</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">URL</span><span class="w">       </span><span class="kt">string</span><span class="w"> </span><span class="s">`json:&#34;url&#34;`</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">Timestamp</span><span class="w"> </span><span class="kt">int64</span><span class="w">  </span><span class="s">`json:&#34;timestamp&#34;`</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">main</span><span class="p">()</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">apex</span><span class="p">.</span><span class="nf">HandleFunc</span><span class="p">(</span><span class="kd">func</span><span class="p">(</span><span class="nx">event</span><span class="w"> </span><span class="nx">json</span><span class="p">.</span><span class="nx">RawMessage</span><span class="p">,</span><span class="w"> </span><span class="nx">ctx</span><span class="w"> </span><span class="o">*</span><span class="nx">apex</span><span class="p">.</span><span class="nx">Context</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="kd">interface</span><span class="p">{},</span><span class="w"> </span><span class="kt">error</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="kd">var</span><span class="w"> </span><span class="nx">m</span><span class="w"> </span><span class="nx">message</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="k">if</span><span class="w"> </span><span class="nx">err</span><span class="w"> </span><span class="o">:=</span><span class="w"> </span><span class="nx">json</span><span class="p">.</span><span class="nf">Unmarshal</span><span class="p">(</span><span class="nx">event</span><span class="p">,</span><span class="w"> </span><span class="o">&amp;</span><span class="nx">m</span><span class="p">);</span><span class="w"> </span><span class="nx">err</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="kc">nil</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="k">return</span><span class="w"> </span><span class="kc">nil</span><span class="p">,</span><span class="w"> </span><span class="nx">err</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="p">}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nx">m</span><span class="p">.</span><span class="nx">ShortURL</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="nx">domain</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">shorten</span><span class="p">.</span><span class="nf">RandSeq</span><span class="p">(</span><span class="nx">length</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nx">m</span><span class="p">.</span><span class="nx">Timestamp</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="nx">time</span><span class="p">.</span><span class="nf">Now</span><span class="p">().</span><span class="nf">UnixNano</span><span class="p">()</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="k">return</span><span class="w"> </span><span class="nx">m</span><span class="p">,</span><span class="w"> </span><span class="kc">nil</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="p">})</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Now you can deploy your application.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">apex deploy --region ap-southeast-2
</span></span></code></pre></div><p>To test out the deployed application create a sample <code>event.json</code> in the project directory containing the following JSON.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span><span class="nt">&#34;url&#34;</span><span class="p">:</span> <span class="s2">&#34;https://www.wolfe.id.au/2016/08/12/bootstrap-an-apex-golang-project/&#34;</span><span class="p">}</span>
</span></span></code></pre></div><p>Then invoke your function.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">apex invoke shorten --region ap-southeast-2 &lt; event.json
</span></span></code></pre></div><p>You now have a modular lambda function deployed using apex and golang 🎉🚀.</p>
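<p>The <code>shorten.RandSeq</code> helper used above generates the random path for the short URL. A minimal sketch of what such a helper might look like follows; this is an illustration only, the actual implementation lives in the linked repositories:</p>

```go
package main

import (
	"fmt"
	"math/rand"
)

// letters is the alphabet used to build random short URL paths.
var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")

// RandSeq returns a random sequence of n characters drawn from letters.
func RandSeq(n int) string {
	b := make([]rune, n)
	for i := range b {
		b[i] = letters[rand.Intn(len(letters))]
	}
	return string(b)
}

func main() {
	// Prints a random 8 character sequence, e.g. something like "k3XbZ9qa".
	fmt.Println(RandSeq(8))
}
```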
<p>The sample projects are located at:</p>
<ul>
<li><a href="https://github.com/wolfeidau/shorten-apex">https://github.com/wolfeidau/shorten-apex</a></li>
<li><a href="https://github.com/wolfeidau/shorten">https://github.com/wolfeidau/shorten</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>bootstrap a golang project</title>
      <link>https://www.wolfe.id.au/2016/08/12/bootstrap-a-golang-project/</link>
      <pubDate>Fri, 12 Aug 2016 03:38:17 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2016/08/12/bootstrap-a-golang-project/</guid>
      <description>&lt;p&gt;I have been helping a few people get up and running with &lt;a href=&#34;http://golang.org&#34;&gt;golang&lt;/a&gt; lately and thought it was about time to post a brief getting started. This is primarily for OSX as this is what most of my colleagues use.&lt;/p&gt;
&lt;p&gt;Firstly you will need to install golang and set up your GOPATH. If you&#39;re on OSX you can just install &lt;a href=&#34;http://brew.sh/&#34;&gt;homebrew&lt;/a&gt; and use it to install golang.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;brew install go
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then in OSX I append a couple of lines to my &lt;code&gt;$HOME/.bash_profile&lt;/code&gt; and source the file to update my current environment. On Linux you typically modify your .bashrc.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been helping a few people get up and running with <a href="http://golang.org">golang</a> lately and thought it was about time to post a brief getting started. This is primarily for OSX as this is what most of my colleagues use.</p>
<p>Firstly you will need to install golang and set up your GOPATH. If you&rsquo;re on OSX you can just install <a href="http://brew.sh/">homebrew</a> and use it to install golang.</p>
<pre tabindex="0"><code>brew install go
</code></pre><p>Then in OSX I append a couple of lines to my <code>$HOME/.bash_profile</code> and source the file to update my current environment. On Linux you typically modify your .bashrc.</p>
<pre tabindex="0"><code>echo &#39;export GOPATH=$HOME/Code/go&#39; &gt;&gt; ~/.bash_profile
echo &#39;export PATH=$PATH:$GOPATH/bin&#39; &gt;&gt; ~/.bash_profile
source ~/.bash_profile
</code></pre><p>Now <code>go env</code> should look like.</p>
<pre tabindex="0"><code>go env | grep GOPATH
GOPATH=&#34;/Users/markw/Code/go&#34;
</code></pre><p>We can now bootstrap your workspace. To do this we are going to create a tree of folders which matches the URL of your <a href="https://github.com">github</a> projects. Note that you will need to change <code>name_here</code> to your github username.</p>
<pre tabindex="0"><code>mkdir -p $GOPATH/src/github.com/name_here
cd $GOPATH/src/github.com/name_here
</code></pre><p>Now in our workspace create a project folder and set up a sub-folder for commands.</p>
<pre tabindex="0"><code>mkdir -p testproject/cmds/testproject
cd testproject
</code></pre><p>The <code>cmds</code> folder enables us to easily override the names of the golang applications we build; each binary inherits the name of its parent folder. So rather than just having a <code>testproject</code> command, I can make sub-folders in the <code>cmds</code> directory for testproject-ui, testproject-server and so on.</p>
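<p>For example, a hypothetical <code>cmds/testproject-server/main.go</code> would build to a binary named <code>testproject-server</code>, purely because of the folder it lives in:</p>

```go
// cmds/testproject-server/main.go (a hypothetical second command).
// The binary built from this folder inherits the folder's name,
// so `go install ./...` produces a testproject-server binary.
package main

import "fmt"

// banner returns the startup message for this command.
func banner() string {
	return "testproject-server starting"
}

func main() {
	fmt.Println(banner())
}
```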
<p>Let&rsquo;s add a main entry point file to <code>cmds</code>.</p>
<pre tabindex="0"><code>touch testproject/cmds/testproject/main.go
</code></pre><p>Add the following code to our file using an editor.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-go" data-lang="go"><span class="line"><span class="cl"><span class="kn">package</span><span class="w"> </span><span class="nx">main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kn">import</span><span class="w"> </span><span class="s">&#34;fmt&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="kd">func</span><span class="w"> </span><span class="nf">main</span><span class="p">()</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nx">fmt</span><span class="p">.</span><span class="nf">Println</span><span class="p">(</span><span class="s">&#34;Hello World!&#34;</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>Now you can create a <code>README.md</code> and init your git project.</p>
<pre tabindex="0"><code>touch README.md .gitignore
git init
</code></pre><p>Next I recommend getting yourself set up with an editor such as <a href="https://www.sublimetext.com/3">sublime text</a>, which I documented here: <a href="https://www.wolfe.id.au/2015/03/05/using-sublime-text-for-go-development/">Using Sublime Text for Go Development</a>.</p>
<p>Once you have an editor you can dig into:</p>
<ul>
<li><a href="https://golang.org/doc/code.html">How to Write Go Code</a></li>
<li><a href="https://www.golang-book.com/">Go Resources</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Development with Webpack and Docker</title>
      <link>https://www.wolfe.id.au/2015/08/08/development-with-webpack-and-docker/</link>
      <pubDate>Sat, 08 Aug 2015 17:56:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2015/08/08/development-with-webpack-and-docker/</guid>
      <description>&lt;p&gt;This describes how to develop front-end projects with &lt;a href=&#34;https://webpack.github.io/&#34;&gt;webpack&lt;/a&gt; inside a &lt;a href=&#34;docker.com&#34;&gt;docker&lt;/a&gt; container using &lt;a href=&#34;http://boot2docker.io/&#34;&gt;boot2docker&lt;/a&gt; on OSX.&lt;/p&gt;
&lt;h2 id=&#34;so-firstly-why-would-we-even-do-this&#34;&gt;So firstly why would we even do this?&lt;/h2&gt;
&lt;p&gt;The main aim of using docker for development is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Portable build environment&lt;/li&gt;
&lt;li&gt;Simplified on-boarding of new developers&lt;/li&gt;
&lt;li&gt;Consistency between development and continuous integration (CI)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In summary tools like docker make it very easy to package up a development environment and share it among a team of developers.&lt;/p&gt;
&lt;h2 id=&#34;sharing-folders&#34;&gt;Sharing folders&lt;/h2&gt;
&lt;p&gt;Within this setup we want the developer to choose what and how they code; on OSX we need an environment where they can just keep using their current editor.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This describes how to develop front-end projects with <a href="https://webpack.github.io/">webpack</a> inside a <a href="https://docker.com">docker</a> container using <a href="http://boot2docker.io/">boot2docker</a> on OSX.</p>
<h2 id="so-firstly-why-would-we-even-do-this">So firstly why would we even do this?</h2>
<p>The main aim of using docker for development is:</p>
<ol>
<li>Portable build environment</li>
<li>Simplified on-boarding of new developers</li>
<li>Consistency between development and continuous integration (CI)</li>
</ol>
<p>In summary tools like docker make it very easy to package up a development environment and share it among a team of developers.</p>
<h2 id="sharing-folders">Sharing folders</h2>
<p>Within this setup we want the developer to choose what and how they code; on OSX we need an environment where they can just keep using their current editor.</p>
<p>The <code>boot2docker</code> service configures a shared folder which mounts the <code>/Users</code> folder on OSX within the virtual machine it provisions, which makes it easy to mount your project all the way through from OSX to your docker container. For more information on how this works see <a href="https://github.com/boot2docker/boot2docker/blob/master/README.md#virtualbox-guest-additions">VirtualBox Guest Additions</a>.</p>
<h1 id="configuration">Configuration</h1>
<p>I typically start my webpack projects using <a href="https://github.com/macropodhq/webpack-skel">macropodhq/webpack-skel</a> by the people at <a href="https://macropod.com/">Macropod</a>.</p>
<p>Once I have my project set up I add the following fragment to the webpack configuration.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl">  <span class="nx">watchOptions</span><span class="o">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nx">poll</span><span class="o">:</span> <span class="mi">1000</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nx">aggregateTimeout</span><span class="o">:</span> <span class="mi">1000</span>
</span></span><span class="line"><span class="cl">  <span class="p">},</span>
</span></span></code></pre></div><p>Add a <code>Dockerfile</code> to the project which uses the official iojs image; note I am using 2.x as 3.x still has issues with some native modules.</p>
<pre tabindex="0"><code>FROM iojs:2

# This will cd to the project root when docker starts in bash
CMD sh -c &#34;cd ${PROJECT_PATH:-/}; exec /bin/bash&#34;
</code></pre><p>Then build your docker container.</p>
<pre tabindex="0"><code>docker build -t iojsfsnotify .
</code></pre><p>Start the container passing in your current working directory as the path you want to change directory to when the container starts.</p>
<pre tabindex="0"><code>docker run -it -e PROJECT_PATH=$(pwd) -e DOCKER_IP=$(boot2docker ip) \
  -v &#34;/Users:/Users&#34; -p 8080:8080 -t iojsfsnotify
</code></pre><p>Then you can install your node modules.</p>
<pre tabindex="0"><code>npm install
</code></pre><p>Start the <code>webpack-dev-server</code>.</p>
<pre tabindex="0"><code>npm start
</code></pre><p>If you want to use the inline live reload mode you will need to use my fork of <a href="https://github.com/wolfeidau/webpack-dev-server">webpack-dev-server</a> for the moment. I would love to get this change merged but I am guessing Tobias Koppers is pretty busy given how much of a runaway success webpack has been.</p>
<p>Below is the fragment from my package.json which points to my fork.</p>
<pre tabindex="0"><code>    &#34;webpack-dev-server&#34;: &#34;wolfeidau/webpack-dev-server&#34;
</code></pre><p>A full example of this project is located at <a href="https://github.com/wolfeidau/webpack-docker-example">wolfeidau/webpack-docker-example</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>IoT Development with Docker Containers</title>
      <link>https://www.wolfe.id.au/2015/07/06/iot-development-with-docker-containers/</link>
      <pubDate>Mon, 06 Jul 2015 17:56:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2015/07/06/iot-development-with-docker-containers/</guid>
      <description>&lt;p&gt;With the almost continuous release of new Internet of Things (IoT) hardware platforms and development boards it is not surprising that SDK delivery has seen a shift to piecemeal and &amp;ldquo;some assembly required&amp;rdquo; solutions. The majority of hardware companies have trouble delivering Software Development Kits (SDKs) which just work.&lt;/p&gt;
&lt;p&gt;Docker presents an opportunity to really make a big leap forward in providing a simple to deploy packaged SDK environments for hardware platforms. It is the first portable package format which works across operating systems, with consistent tooling and a mechanism to share changes with others.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With the almost continuous release of new Internet of Things (IoT) hardware platforms and development boards it is not surprising that SDK delivery has seen a shift to piecemeal and &ldquo;some assembly required&rdquo; solutions. The majority of hardware companies have trouble delivering Software Development Kits (SDKs) which just work.</p>
<p>Docker presents an opportunity to really make a big leap forward in providing a simple to deploy packaged SDK environments for hardware platforms. It is the first portable package format which works across operating systems, with consistent tooling and a mechanism to share changes with others.</p>
<p>For a more in depth run down on Docker and IoT see <a href="http://www.ibm.com/developerworks/mobile/library/iot-docker-containers/index.html">Rapidly develop Internet of Things apps with Docker Containers</a> by <a href="https://twitter.com/AnnaGerber">@AnnaGerber</a></p>
<h2 id="so-how-would-this-work">So how would this work?</h2>
<p>To illustrate how docker simplifies delivery of these environments, let&rsquo;s look at the Docker container I have built for development of esp8266 projects. This container is shared on docker hub as <a href="https://registry.hub.docker.com/u/wolfeidau/esp8266-dev/">esp8266-dev</a>. To see how this SDK environment is built, take a look at the <a href="https://github.com/wolfeidau/ansible-esp8266-role/blob/master/tasks/main.yml">ansible-esp8266-role</a> which is used to bootstrap it.</p>
<p>To get started you will need to set up docker on your system; in my case I am using <a href="http://boot2docker.io/">boot2docker</a> which works on OSX and Windows.</p>
<p>The first thing to understand with boot2docker is that your <code>/Users</code> folder on OSX is configured as a shared folder in the boot2docker virtual machine. This makes it easy to import data all the way through from OSX to your docker container. For more information on how this works see <a href="https://github.com/boot2docker/boot2docker/blob/master/README.md#virtualbox-guest-additions">VirtualBox Guest Additions</a> in the boot2docker project.</p>
<p>In my case I am using a project based on the <a href="https://github.com/esp8266/source-code-examples/tree/master/basic_example">esp8266/source-code-examples</a> basic example.</p>
<p>So let&rsquo;s add a <code>Dockerfile</code> to this project; note that I have updated the <code>WORKDIR</code> to match the path to my ESP project.</p>
<pre tabindex="0"><code>FROM wolfeidau/esp8266-dev:1.1.0

# add all the SDK stuff to the PATH
ENV PATH=$PATH:/opt/Espressif/crosstool-NG/builds/xtensa-lx106-elf/bin

# Path which contains your esp8266 project source code
WORKDIR /Users/markw/Code/Systems/esp8266-led-lamp

# pass -v /Users:/Users to ensure your shared folder is available within 
# the container for builds.
VOLUME /Users

USER ubuntu
</code></pre><p>Then to build your container just run.</p>
<pre tabindex="0"><code>docker build -t esp8266-build .
</code></pre><p>And when you want to do a build run the following:</p>
<pre tabindex="0"><code>docker run -v /Users:/Users -i -t esp8266-build make
</code></pre><p>Now if you check in your <code>firmware</code> directory you should have the files required to flash your esp8266 project!</p>
<p>If we want to mess around inside the container we can run.</p>
<pre tabindex="0"><code>docker run -v /Users:/Users -i -t esp8266-build bash
</code></pre><p>This project is something I am working on at the moment <a href="https://github.com/wolfeidau/esp8266-led-lamp">esp8266-led-lamp</a>.</p>
<p>So in summary we have installed boot2docker and built an esp8266 project with little or no messing around with complex SDK setup. In my view this is a big step forward in shortening the time to build for hardware projects, and simplifying delivery of complex SDK environments.</p>
<p>Hopefully the likes of <a href="http://www.atmel.com/">Atmel</a> and <a href="http://www.ti.com/">Texas Instruments</a> look at using Docker in the future.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Github Security</title>
      <link>https://www.wolfe.id.au/2015/05/06/github-security/</link>
      <pubDate>Wed, 06 May 2015 16:56:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2015/05/06/github-security/</guid>
      <description>&lt;p&gt;Having worked with &lt;a href=&#34;https://github.com&#34;&gt;Github&lt;/a&gt; for the last six years, commercially for the last three I thought I would do a post on security, or more specifically, protecting your projects, and the ones you work on for others, this may be friends, or it could be a company. Either way the aim of this post is to encourage you to review the security of your personal Github account.&lt;/p&gt;
&lt;h2 id=&#34;two-factor-authentication&#34;&gt;Two Factor Authentication&lt;/h2&gt;
&lt;p&gt;As soon as you start building things in Github for other people I recommend you enable two factor authentication; the process is pretty straightforward and services such as &lt;a href=&#34;https://www.authy.com/&#34;&gt;Authy&lt;/a&gt; and &lt;a href=&#34;https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&amp;amp;hl=en&#34;&gt;Google Authenticator&lt;/a&gt; make this even easier.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Having worked with <a href="https://github.com">Github</a> for the last six years, and commercially for the last three, I thought I would do a post on security, or more specifically on protecting your projects and the ones you work on for others, whether for friends or a company. Either way the aim of this post is to encourage you to review the security of your personal Github account.</p>
<h2 id="two-factor-authentication">Two Factor Authentication</h2>
<p>As soon as you start building things in Github for other people I recommend you enable two factor authentication; the process is pretty straightforward and services such as <a href="https://www.authy.com/">Authy</a> and <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&amp;hl=en">Google Authenticator</a> make this even easier.</p>
<h2 id="ssh-keys">SSH Keys</h2>
<p>Firstly please go to your <a href="https://github.com/settings/ssh">Github SSH Keys Page</a> and take a look at the number of SSH keys you have, either you can click the link or login to your Github account and select the settings icon in the top right hand corner.</p>
<ul>
<li>Are they all currently being used?</li>
<li>Do you know where they all are?</li>
</ul>
<p>Ideally use the key fingerprint to work out which ones you are using, and clear out any old ones. Most importantly, locate and verify where they all are; these keys are a gateway into all your stuff, so make sure you keep them safe.</p>
<p>If you&rsquo;re on OSX or Linux, you can see the fingerprints for all your SSH keys by running the following command.</p>
<pre tabindex="0"><code>find ~/.ssh -name \*.pub | xargs -n1 ssh-keygen -lf
</code></pre><h2 id="third-party-applications">Third Party Applications</h2>
<p>Next, let&rsquo;s talk about third party applications. I am not sure about you, but as time goes on I seem to be getting access to an increasing tail of projects and organizations. Now in doing so, I also share that access with some third party services. This is especially the case with some of the early ones, as their level of access was quite broad.</p>
<p>So how about we visit the <a href="https://github.com/settings/applications">Github Applications Page</a> and clear out any you currently don&rsquo;t use; I am especially talking about services that have access to ALL your repositories such as <a href="https://travis-ci.org/">travis</a> and <a href="https://drone.io/">drone</a>.</p>
<p>Now there is nothing wrong with these specific services, however the level of access they require to operate is typically &ldquo;all the things&rdquo;.</p>
<p>If you do run a Github organisation you should also review the <a href="https://help.github.com/articles/about-third-party-application-restrictions/">third-party application restrictions</a> features.</p>
<h2 id="continious-integration-services">Continuous Integration Services</h2>
<p>So if you&rsquo;re using Github for a few personal projects, most of which are open source, then services such as travis and drone are fantastic. But if you&rsquo;re like me and work on Github for a living then you probably have access to code that belongs to someone else. If this is the case I recommend being a bit more careful about who and what you give access to all your repositories.</p>
<p>So how do we solve this problem, getting the most out of these services, while also controlling what they have access to see and possibly modify?</p>
<p>Rather than using your account as a gateway into the myriad of projects you have access to, how about creating a build user for that task? Given how easy it is to manage a variety of logins with services such as <a href="https://agilebits.com/onepassword">1password</a> and the like, maintaining an extra account is pretty simple.</p>
<p>So what other advantages are there to using a separate user for this?</p>
<ul>
<li>Lets you tightly control what services can access, for example only giving CI services read access to projects which need builds run.</li>
<li>If you go on holidays, or change roles, other team members can take over responsibility for this account and things will continue to function.</li>
<li>Enables you to centralise all ssh-keys used for builds or deployments.</li>
</ul>
<p>The way this works is you set up a CI user and add it to the organisation, then set up a CI team in Github to manage this user&rsquo;s access to projects. If necessary the credentials can be shared by a couple of key people within the team, so you can actually have a holiday in peace.</p>
<p>When keys are required for automation, or a service requires access to repositories, you log in as that user and set them up; likewise all SSH keys used for automation go on this account.</p>
<p>Once setup you have essentially sandboxed access by external services to an account which has only what it needs to operate. This in turn leaves your account with as few links to external services as possible.</p>
<h2 id="summary">Summary</h2>
<p>So by the end we should have:</p>
<ul>
<li>Enabled two factor authentication on your account.</li>
<li>Done a cleanup of our SSH keys and worked out where they all are stored.</li>
<li>Cleared out unused third party applications.</li>
<li>Reconfigured continuous integration services to use a separate user in your organisation and restricted that to the bare minimum access.</li>
</ul>
<h2 id="closing-thoughts">Closing Thoughts</h2>
<p>I hope I have at least encouraged you to review your Github account and give it a bit of a spring clean. I am sure there are a few other things you can do to tighten up access to the things you work on; you&rsquo;re welcome to leave a comment if you have any other ways to improve the security of your Github account.</p>
<p>Also thanks to <a href="https://twitter.com/michaelneale">@michaelneale</a> for proof reading, and <a href="https://twitter.com/adam__brady">@adam__brady</a> for reminding me that not everyone has two factor authentication enabled, and <a href="https://twitter.com/johnbarton">@johnbarton</a> for highlighting the third party restrictions feature I wasn&rsquo;t aware existed.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>syslog logging driver for Docker</title>
      <link>https://www.wolfe.id.au/2015/05/03/syslog-logging-driver-for-docker/</link>
      <pubDate>Sun, 03 May 2015 20:18:23 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2015/05/03/syslog-logging-driver-for-docker/</guid>
      <description>&lt;p&gt;This post will illustrate how to use the new &lt;a href=&#34;https://docs.docker.com/reference/run/#logging-drivers-log-driver&#34;&gt;syslog logging driver&lt;/a&gt; for &lt;a href=&#34;http://docker.com&#34;&gt;Docker&lt;/a&gt;, along with some notes on using &lt;a href=&#34;https://docs.docker.com/machine/&#34;&gt;docker-machine&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The first thing to be clear on is that the syslog logging driver enables you to relay the log messages written to stdout/stderr within a container to the syslog service on the host machine.&lt;/p&gt;
&lt;p&gt;For this example I am going to do this for an agent of the &lt;a href=&#34;https://buildkite.com&#34;&gt;buildkite&lt;/a&gt; continuous integration (CI) service running inside a Docker container.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This post will illustrate how to use the new <a href="https://docs.docker.com/reference/run/#logging-drivers-log-driver">syslog logging driver</a> for <a href="http://docker.com">Docker</a>, along with some notes on using <a href="https://docs.docker.com/machine/">docker-machine</a>.</p>
<p>The first thing to be clear on is that the syslog logging driver enables you to relay the log messages written to stdout/stderr within a container to the syslog service on the host machine.</p>
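<p>From the application&rsquo;s point of view nothing special is required; it simply writes to stdout and stderr, and Docker relays both streams. A trivial Go sketch (not the buildkite agent itself, just an illustration of a containerized process logging this way):</p>

```go
package main

import (
	"fmt"
	"os"
)

// logLine formats a message the way a simple agent might,
// left-padding the level to a fixed width.
func logLine(level, msg string) string {
	return fmt.Sprintf("%-6s %s", level, msg)
}

func main() {
	// Anything written to stdout/stderr is captured by the
	// configured Docker logging driver, syslog in this case.
	fmt.Fprintln(os.Stdout, logLine("INFO", "agent starting"))
	fmt.Fprintln(os.Stderr, logLine("ERROR", "example error output"))
}
```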
<p>For this example I am going to do this for an agent of the <a href="https://buildkite.com">buildkite</a> continuous integration (CI) service running inside a Docker container.</p>
<p>To ensure these logs are only retained on the host for a short time <a href="http://linuxcommand.org/man_pages/logrotate8.html">logrotate</a> will be employed to rotate and delete aging logs.</p>
<p>Firstly, to use the new log drivers you need Docker 1.6; in my case I used <code>docker-machine</code> to spin up a new system on <a href="https://www.digitalocean.com/">digital ocean</a> as follows.</p>
<pre tabindex="0"><code>docker-machine create -d digitalocean \
	--digitalocean-access-token XXXXX \ 
	--digitalocean-image &#39;ubuntu-14-04-x64&#39; \
	--digitalocean-region &#39;sfo1&#39; \
	--digitalocean-size &#39;1gb&#39; buildkite01
</code></pre><p>Note: You will need to grab your access token from the digital ocean dashboard.</p>
<p>Once the system is up and running use <code>docker-machine</code> to log into the host.</p>
<pre tabindex="0"><code>docker-machine ssh buildkite01
</code></pre><p>Configure rsyslog to isolate the Docker logs into their own file. To do this create <code>/etc/rsyslog.d/10-docker.conf</code> and copy the following content into the file using your favorite text editor.</p>
<pre tabindex="0"><code># Docker logging
daemon.* {
 /var/log/docker.log
 stop
}
</code></pre><p>In summary this will write all logs for the daemon category to <code>/var/log/docker.log</code> then stop processing that log entry so it isn&rsquo;t written to the system&rsquo;s default syslog file.</p>
<p>Now we configure <code>logrotate</code> to roll and archive these files. Create the <code>/etc/logrotate.d/docker</code> file and again copy the content as follows to it.</p>
<pre tabindex="0"><code>/var/log/docker.log {
    size 100M
    rotate 2
    missingok
    compress
}
</code></pre><p>This entry will roll the <code>docker.log</code> when it gets above 100MB and retain two of these files.</p>
<p>Note: The <code>logrotate</code> cron job only runs daily by default, so this check will only be done at the end of the day.</p>
<p>Now restart the <a href="http://www.rsyslog.com/">rsyslog</a> service.</p>
<pre tabindex="0"><code>service rsyslog restart
</code></pre><p>Now build your Docker container on your Docker machine from your laptop as follows. This will transfer the contents of <code>.</code> to the remote system and build your Docker container there, then tag it.</p>
<pre tabindex="0"><code>docker $(docker-machine config buildkite01) build -t buildkite/wolfeidau-golang:latest .
</code></pre><p>Then launch a named container, in this case we are passing <code>--log-driver=syslog</code> along with other buildkite agent specific information, and the tag we just built.</p>
<pre tabindex="0"><code>docker $(docker-machine config buildkite01) run \
	-d --rm --name buildkite-golang --log-driver=syslog \
	-e BUILDKITE_AGENT_TOKEN=XXX \
	-e BUILDKITE_AGENT_META_DATA=&#34;golang=1.4.2&#34; \
	buildkite/wolfeidau-golang:latest
</code></pre><p>Then we ssh into the remote system and tail the <code>/var/log/docker.log</code> log file to see output from the buildkite agent.</p>
<pre tabindex="0"><code>$ docker-machine ssh buildkite01
# tail /var/log/docker.log
May  3 05:53:56 buildkite01 docker/7ebdb2baff9c[3207]: 2015-05-03 09:53:56 INFO   Registering agent with Buildkite...
May  3 05:53:57 buildkite01 docker/7ebdb2baff9c[3207]: 2015-05-03 09:53:57 INFO   Successfully registred agent &#34;7ebdb2baff9c&#34; with meta-data [golang=1.4.2]
</code></pre><p>At the moment some things are on the horizon for this driver, including <a href="https://github.com/docker/docker/pull/12668">changing the tag</a> and possibly remote server configuration.</p>
<p>That is the end of this brief post, hopefully it helps others get started with this new feature, and provides some tips on how to configure the rsyslog service to work with it.</p>
<p>Lastly I typically use <a href="http://www.ansible.com/home">ansible</a> to automate this sort of thing, but it is nice to walk through it at least once.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Using Sublime Text for Go Development</title>
      <link>https://www.wolfe.id.au/2015/03/05/using-sublime-text-for-go-development/</link>
      <pubDate>Thu, 05 Mar 2015 00:10:18 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2015/03/05/using-sublime-text-for-go-development/</guid>
      <description>&lt;p&gt;For the last 6 months I have been using &lt;a href=&#34;http://golang.org&#34;&gt;Go&lt;/a&gt; as my primary development language and for a large part of that I have been using &lt;a href=&#34;http://www.sublimetext.com/3&#34;&gt;sublime text 3&lt;/a&gt;. Along the way the go developers have released quite a few handy and time saving tools which have all been supported by &lt;a href=&#34;https://github.com/DisposaBoy/GoSublime&#34;&gt;GoSublime&lt;/a&gt; with some assembly required. This post will provide a rundown on how to setup go-sublime and the array of tools which make golang development as productive as possible.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>For the last 6 months I have been using <a href="http://golang.org">Go</a> as my primary development language, and for a large part of that I have been using <a href="http://www.sublimetext.com/3">Sublime Text 3</a>. Along the way the Go developers have released quite a few handy and time-saving tools, all of which are supported by <a href="https://github.com/DisposaBoy/GoSublime">GoSublime</a> with some assembly required. This post will provide a rundown on how to set up GoSublime and the array of tools which make golang development as productive as possible.</p>
<p>Firstly, if you&rsquo;re new to golang, before you set up your workspace watch this video: <a href="https://www.youtube.com/watch?v=XCsL89YtqCs">Writing, building, installing, and testing Go code</a>.</p>
<p>When I am setting up a new system I typically run the following commands on OSX or Linux. This example is of course for bash; if you use zsh I am sure you can adapt it where needed.</p>
<pre tabindex="0"><code>echo &#39;export GOPATH=$HOME/Code/go&#39; &gt;&gt; ~/.bash_profile
echo &#39;export PATH=$PATH:$GOPATH/bin&#39; &gt;&gt; ~/.bash_profile 
source ~/.bash_profile
mkdir -p ~/Code/go/src/github.com/wolfeidau
cd !$
</code></pre><p><em>Note:</em> If you&rsquo;re on OSX you should use <code>.bash_profile</code>, on Linux you typically use your <code>.bashrc</code>.</p>
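<p>For zsh users, the equivalent setup might look like this (a hedged sketch, assuming zsh reads <code>~/.zshrc</code>; the exports simply mirror the bash ones above):</p>

```shell
# Append the same GOPATH/PATH exports to zsh's rc file.
# (Held in a variable so you can point it elsewhere if you prefer.)
rc="$HOME/.zshrc"
echo 'export GOPATH=$HOME/Code/go' >> "$rc"
echo 'export PATH=$PATH:$GOPATH/bin' >> "$rc"
# Create the same workspace directory layout as above.
mkdir -p "$HOME/Code/go/src/github.com/wolfeidau"
grep GOPATH "$rc"
```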
<p>Once I have done this I can either clone a project from my <a href="https://github.com">github</a> or make a directory for a new one.</p>
<p>Now that you have your environment set up you can install some tools.</p>
<pre tabindex="0"><code>go get -u golang.org/x/tools/cmd/goimports
go get -u golang.org/x/tools/cmd/vet
go get -u golang.org/x/tools/cmd/oracle
go get -u golang.org/x/tools/cmd/godoc
</code></pre><p>Then install <a href="https://packagecontrol.io/installation">package control</a> in your Sublime editor and add the following plugins.</p>
<ul>
<li><a href="https://github.com/DisposaBoy/GoSublime">GoSublime</a></li>
<li><a href="https://github.com/waigani/GoOracle">GoOracle</a></li>
</ul>
<p>Then update your GoSublime user configuration by opening Preferences -&gt; Package Settings -&gt; GoSublime -&gt; Settings - User, which should open your <code>GoSublime.sublime-settings</code> file; below are the contents of mine.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-javascript" data-lang="javascript"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">	<span class="c1">// you may set specific environment variables here
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	<span class="c1">// e.g &#34;env&#34;: { &#34;PATH&#34;: &#34;$HOME/go/bin:$PATH&#34; }
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	<span class="c1">// in values, $PATH and ${PATH} are replaced with
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	<span class="c1">// the corresponding environment(PATH) variable, if it exists.
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	<span class="s2">&#34;env&#34;</span><span class="o">:</span> <span class="p">{</span><span class="s2">&#34;GOPATH&#34;</span><span class="o">:</span> <span class="s2">&#34;$HOME/Code/go&#34;</span><span class="p">,</span> <span class="s2">&#34;PATH&#34;</span><span class="o">:</span> <span class="s2">&#34;$GOPATH/bin:$PATH&#34;</span> <span class="p">},</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="s2">&#34;fmt_cmd&#34;</span><span class="o">:</span> <span class="p">[</span><span class="s2">&#34;goimports&#34;</span><span class="p">],</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">	<span class="c1">// enable comp-lint, this will effectively disable the live linter
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	<span class="s2">&#34;comp_lint_enabled&#34;</span><span class="o">:</span> <span class="kc">true</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">	<span class="c1">// list of commands to run
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	<span class="s2">&#34;comp_lint_commands&#34;</span><span class="o">:</span> <span class="p">[</span>
</span></span><span class="line"><span class="cl">	    <span class="c1">// run `golint` on all files in the package
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="c1">// &#34;shell&#34;:true is required in order to run the command through your shell (to expand `*.go`)
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="c1">// also see: the documentation for the `shell` setting in the default settings file ctrl+dot,ctrl+4
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="p">{</span><span class="s2">&#34;cmd&#34;</span><span class="o">:</span> <span class="p">[</span><span class="s2">&#34;golint *.go&#34;</span><span class="p">],</span> <span class="s2">&#34;shell&#34;</span><span class="o">:</span> <span class="kc">true</span><span class="p">},</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">	    <span class="c1">// run go vet on the package
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="p">{</span><span class="s2">&#34;cmd&#34;</span><span class="o">:</span> <span class="p">[</span><span class="s2">&#34;go&#34;</span><span class="p">,</span> <span class="s2">&#34;vet&#34;</span><span class="p">]},</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">	    <span class="c1">// run `go install` on the package. GOBIN is set,
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="c1">// so `main` packages shouldn&#39;t result in the installation of a binary
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="p">{</span><span class="s2">&#34;cmd&#34;</span><span class="o">:</span> <span class="p">[</span><span class="s2">&#34;go&#34;</span><span class="p">,</span> <span class="s2">&#34;install&#34;</span><span class="p">]}</span>
</span></span><span class="line"><span class="cl">	<span class="p">],</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">	<span class="s2">&#34;on_save&#34;</span><span class="o">:</span> <span class="p">[</span>
</span></span><span class="line"><span class="cl">	    <span class="c1">// run comp-lint when you save,
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="c1">// naturally, you can also bind this command `gs_comp_lint`
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="c1">// to a key binding if you want
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>	    <span class="p">{</span><span class="s2">&#34;cmd&#34;</span><span class="o">:</span> <span class="s2">&#34;gs_comp_lint&#34;</span><span class="p">}</span>
</span></span><span class="line"><span class="cl">	<span class="p">]</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span></span></span></code></pre></div>
<p><em>Note:</em> Ensure you update the <code>GOPATH</code> value to match the one configured earlier.</p>
<p>Once you restart sublime you should be ready to roll!</p>
<p>In addition to these plugins I also use <a href="https://github.com/jisaacks/GitGutter">GitGutter</a> which provides some highlighting of changes for source code under <code>git</code>.</p>
<p>Lastly just another tip to enable spell check in markdown files.</p>
<p>Open any markdown file, then go to Preferences -&gt; Settings - More -&gt; Syntax Specific - User, which should open your <code>Markdown.sublime-settings</code> file; below are the contents of mine.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-javascript" data-lang="javascript"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="s2">&#34;spell_check&#34;</span><span class="o">:</span> <span class="kc">true</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span></span></span></code></pre></div>
<p>Thanks to <a href="https://twitter.com/bamse">Marius Ursache</a> for his feedback.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Golang can I have a package manager please?</title>
      <link>https://www.wolfe.id.au/2014/03/23/golang-can-i-have-a-package-manager-please/</link>
      <pubDate>Sun, 23 Mar 2014 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2014/03/23/golang-can-i-have-a-package-manager-please/</guid>
<description>&lt;p&gt;Currently &lt;a href=&#34;http://golang.org&#34;&gt;golang&lt;/a&gt; has no &lt;em&gt;standard&lt;/em&gt; package manager; this in my view makes it harder for those new to the language. golang has for the most part been a pleasure to use, built-in features are well thought out and help new users get started fast. Dependency management is by contrast a jarring and painful experience for those starting with golang. To understand why I believe this is the case I have put together a proposal of sorts for a package manager.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Currently <a href="http://golang.org">golang</a> has no <em>standard</em> package manager; this in my view makes it harder for those new to the language. golang has for the most part been a pleasure to use, built-in features are well thought out and help new users get started fast. Dependency management is by contrast a jarring and painful experience for those starting with golang. To understand why I believe this is the case I have put together a proposal of sorts for a package manager.</p>
<h2 id="what-does-a-package-manager-do">What does a package manager do?</h2>
<p>A package manager&rsquo;s core <strong>job</strong> is to:</p>
<ul>
<li>Provide a simple method of retrieving a snapshot of upstream libraries.</li>
<li>Enable a user to maintain compatible versions of these upstream libraries therefore avoiding broken builds when one of these modules is changed.</li>
<li>Let people maintaining libraries communicate differing levels of interface/module changes to downstream users via semantic versioning.</li>
</ul>
<h2 id="the-current-solution">The current solution</h2>
<p>Firstly, <code>go get</code> is a simple solution to decentralised retrieval of dependencies; without much grief a developer can pull down and maintain a copy of upstream libraries. The main issue is that this currently only solves half of the problem: developers are left to <code>manage</code> these dependencies themselves, including vetting all changes upstream.</p>
<p>This is an issue because:</p>
<ul>
<li>Developers assume this is a solved problem, not something they have to deal with while also learning a new language.</li>
<li>Those new to development get the double whammy of having no idea where to even start with this problem.</li>
</ul>
<p>Now on the other side of the fence, as a library writer I have no standard way to communicate massive breaks in API compatibility. Again this disregards many years of learning/development of things like semantic versioning.</p>
<p>From what I have observed this has led most larger projects to naturally limit the number of external dependencies in use.</p>
<p>Lastly without a nice solution for modularising software most developers won&rsquo;t bother.</p>
<h2 id="why-add-versions">Why add versions?</h2>
<p>In my view the value that semantic versioning adds to dependency management cannot be disregarded. As stated it gives both consumers and producers a way of communicating changes. In addition it provides people with a way of addressing a particular release of an API, therefore alleviating confusion about &ldquo;which&rdquo; snapshot of a library a developer is having issues with.</p>
<p>The assumption that everyone is always up-to-date is disregarding the reality of how people maintain software in the greater software development community.</p>
<h2 id="can-it-be-done-using-a-decentralised-model">Can it be done using a decentralised model?</h2>
<p>So this raises the question of whether or not versioning can be added to go without introducing a central package management system.</p>
<p>It is my opinion that simply encouraging people to manage their project using a standard versioning scheme, with a standard method for tagging their versions would be a good starting point.</p>
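<p>As a concrete sketch of that starting point: semver-style git tags alone are enough for a consumer (or a tool) to pin or resolve versions. The repository below is a throwaway example, not a real project:</p>

```shell
# Create a throwaway repo and tag two releases using semantic versions.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag v0.1.0
git tag v0.2.0
# A tool can then pick the highest version tag as "the latest release".
git tag -l 'v*' | sort -V | tail -n 1
```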
<p>At some point it would be helpful to have:</p>
<ol>
<li>A central metadata store for download statistics; this is helpful to package maintainers to raise awareness of the impact of change on others. This would really fill the gaps in existing systems such as <a href="http://github.com">github</a> around exposing download counts for a given project.</li>
<li>A distributed snapshot repository of packages, just to avoid the inevitable deletion or reorganisation of repositories (YES THIS HAPPENS).</li>
</ol>
<p>Note: I don&rsquo;t think github should have to provide this by the way.</p>
<h2 id="what-is-the-shortest-path-to-easing-pain-for-new-users">What is the shortest path to easing pain for new users?</h2>
<p>Based on my review, this <strong>tool</strong> just needs to do the following:</p>
<ol>
<li>Simple command line tool, one entry point and minimal options.</li>
<li>The ability to manage versioning of my project using the same tool.</li>
<li>A method of retrieving different versions of a dependency with an option to save this to a file.</li>
<li>Automatic upgrades of dependencies based on a known standard versioning scheme such as <a href="http://semver.org/">semver</a>.</li>
<li>Fall back to the existing <code>go get</code> model with an obvious report at the end indicating this dependency doesn&rsquo;t provide versions.</li>
<li>Provide an idiomatic way to store this version information, based on reflection on existing features for maintaining meta data. My view is this should be a <code>version.go</code> in each project.</li>
</ol>
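<p>Point 6 could be as simple as a generated file along these lines (purely a hypothetical sketch of the proposed convention; the package name and tooling are assumptions, created here from the shell):</p>

```shell
# Write a hypothetical version.go carrying the package's semantic version.
cd "$(mktemp -d)"
cat > version.go <<'EOF'
package mylib

// Version is the semantic version of this package.
// A release tool could rewrite this constant on each release.
const Version = "0.1.0"
EOF
grep 'const Version' version.go
```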
<p>This is in response to a <a href="https://gist.github.com/davecheney/9716518">gist</a> posted by <a href="https://twitter.com/davecheney">Dave Cheney</a> after a <a href="https://twitter.com/davecheney/status/447497663609450496">discussion</a> on twitter with <a href="https://twitter.com/mitchellh">Mitchell Hashimoto</a>.</p>
<p>Thanks to <a href="https://twitter.com/deoxxa">Conrad</a> and <a href="https://twitter.com/nicholasf">Nicholas Faiz</a> for their feedback and critique.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Configuring a simple IPv6 network</title>
      <link>https://www.wolfe.id.au/2014/03/02/configuring-a-simple-ipv6-network/</link>
      <pubDate>Sun, 02 Mar 2014 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2014/03/02/configuring-a-simple-ipv6-network/</guid>
<description>&lt;p&gt;Recently I have been helping &lt;a href=&#34;https://twitter.com/geekscape&#34;&gt;Andy Gelme&lt;/a&gt; with a project which uses &lt;a href=&#34;http://contiki-os.org&#34;&gt;contiki-os&lt;/a&gt;, and &lt;a href=&#34;http://en.wikipedia.org/wiki/6LoWPAN&#34;&gt;6lowpan&lt;/a&gt; on a device called a &lt;a href=&#34;http://www.m9design.co/&#34;&gt;MeshThing&lt;/a&gt;. This required us to set up a small &lt;a href=&#34;http://en.wikipedia.org/wiki/IPv6&#34;&gt;IPv6&lt;/a&gt; network from scratch, independent of the internet. This turned out to be quite a different objective from most of the how-tos we found, so I decided to document our method, as much for others as for myself.&lt;/p&gt;
&lt;p&gt;In our case the network looked as follows:&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Recently I have been helping <a href="https://twitter.com/geekscape">Andy Gelme</a> with a project which uses <a href="http://contiki-os.org">contiki-os</a>, and <a href="http://en.wikipedia.org/wiki/6LoWPAN">6lowpan</a> on a device called a <a href="http://www.m9design.co/">MeshThing</a>. This required us to set up a small <a href="http://en.wikipedia.org/wiki/IPv6">IPv6</a> network from scratch, independent of the internet. This turned out to be quite a different objective from most of the how-tos we found, so I decided to document our method, as much for others as for myself.</p>
<p>In our case the network looked as follows:</p>
<p><em>[network diagram not included in the feed]</em></p>
<p>In this diagram we have macbook A, which is connected to a MeshThing running contiki-os via serial over USB. The IPv6 connection is provided by <code>tunslip6</code>, which gives us a point-to-point link over the serial connection. The command to start <code>tunslip6</code> is as follows:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">sudo ./tunslip6 -B 38400 -s /dev/tty.usbmodem1411 aaaa::1/64
</span></span><span class="line"><span class="cl">********SLIP started on ``/dev/tty.usbmodem1411&#39;&#39;
</span></span><span class="line"><span class="cl">opened tun device ``/dev/tun0&#39;&#39;
</span></span><span class="line"><span class="cl">ifconfig tun0 inet6 up
</span></span><span class="line"><span class="cl">ifconfig tun0 inet6 aaaa::1/64 add
</span></span><span class="line"><span class="cl">sysctl -w net.inet6.ip6.forwarding=1
</span></span><span class="line"><span class="cl">net.inet6.ip6.forwarding: 1 -&gt; 1
</span></span><span class="line"><span class="cl">ifconfig tun0
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">tun0: flags=8851&lt;UP,POINTOPOINT,RUNNING,SIMPLEX,MULTICAST&gt; mtu 1500
</span></span><span class="line"><span class="cl">	inet6 fe80::bae8:56ff:ffff:ffff%tun0 prefixlen 64 optimistic scopeid 0xc
</span></span><span class="line"><span class="cl">	inet6 aaaa::1 prefixlen 64 tentative
</span></span><span class="line"><span class="cl">	nd6 options=1&lt;PERFORMNUD&gt;
</span></span><span class="line"><span class="cl">	open (pid 51393)
</span></span><span class="line"><span class="cl">RPL started
</span></span><span class="line"><span class="cl">Online
</span></span><span class="line"><span class="cl">*** Address:aaaa::1 =&gt; aaaa:0000:0000:0000
</span></span><span class="line"><span class="cl">Got configuration message of type P
</span></span><span class="line"><span class="cl">Setting prefix aaaa::
</span></span><span class="line"><span class="cl">Server IPv6 addresses:
</span></span><span class="line"><span class="cl"> aaaa::11:22ff:ffff:ffff
</span></span><span class="line"><span class="cl"> fe80::11:22ff:ffff:ffff</span></span></code></pre></div>
<p>This command, for those not aware, uses my USB serial device, running at 38400 baud, to establish a link to the device.</p>
<p>At the end of this command we are providing a prefix of <code>aaaa::1/64</code>, which enables the device to pick its own IPv6 address using <a href="http://en.wikipedia.org/wiki/IPv6#Stateless_address_autoconfiguration_.28SLAAC.29">Stateless Address Autoconfiguration</a> (SLAAC). It is important to note that this means the prefix needs to use a <code>/64</code> mask.</p>
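<p>For the curious, the SLAAC interface identifier can be derived by hand from the MAC: split the MAC in half, insert <code>fffe</code> in the middle, and flip the universal/local bit of the first byte. The MAC below is the one from the <code>ifconfig</code> output in this post, though the addresses printed there look partially anonymised, so the result won&rsquo;t match them byte for byte:</p>

```shell
# Derive the modified EUI-64 interface identifier from a MAC address.
mac="b8:e8:56:ff:ff:ff"
IFS=: read -r a b c d e f <<EOF
$mac
EOF
# Flip the universal/local bit (0x02) of the first byte.
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))
# Insert ff:fe in the middle to form the 64-bit interface identifier.
printf '%s%s:%sff:fe%s:%s%s\n' "$a" "$b" "$c" "$d" "$e" "$f"
```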
<p>Now, as we can see in our original diagram, we need to route packets between the <code>aaaa::1/64</code> network assigned to our mesh and the local wireless network.</p>
<p>Given we need to enable routing between macbook B and the MeshThing, we need the local wireless network to provide an IPv6 prefix for auto-configuration of this host.</p>
<p>As macbook A is going to be doing the routing, we will get it to advertise our prefix of <code>bbbb::1/64</code> on the existing wireless lan.</p>
<p>This is done using <code>rtadvd</code> on BSD derivatives, and <code>radvd</code> on linux. To get this running on OSX, edit your <code>/etc/rtadvd.conf</code> as root and add the following lines.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">en1:<span class="se">\
</span></span></span><span class="line"><span class="cl"><span class="se"></span> :addr<span class="o">=</span><span class="s2">&#34;bbbb::1&#34;</span>:prefixlen#64:</span></span></code></pre></div>
<p>On linux we can use <code>radvd</code>, which can be installed via <code>apt-get</code> on ubuntu. This is configured via <code>/etc/radvd.conf</code> with the equivalent configuration below.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">interface eth1 <span class="o">{</span>
</span></span><span class="line"><span class="cl">  <span class="c1">## (Send advertisement messages to other hosts)</span>
</span></span><span class="line"><span class="cl">  AdvSendAdvert on<span class="p">;</span>
</span></span><span class="line"><span class="cl">  <span class="c1">## IPv6 subnet prefix</span>
</span></span><span class="line"><span class="cl">  prefix bbbb::/64 <span class="o">{</span>
</span></span><span class="line"><span class="cl">    AdvOnLink on<span class="p">;</span>
</span></span><span class="line"><span class="cl">    AdvAutonomous on<span class="p">;</span>
</span></span><span class="line"><span class="cl">  <span class="o">}</span><span class="p">;</span>
</span></span><span class="line"><span class="cl"><span class="o">}</span><span class="p">;</span></span></span></code></pre></div>
<p>So in this case en1 on macbook A is the wireless interface. We start <code>rtadvd</code>, which after about 4 seconds sends out a router advertisement; this triggers auto-configuration of all hosts on the wireless network, providing a prefix to macbook B which SLAAC will use to generate an IPv6 address.</p>
<p>To check that our configuration worked correctly: the first thing you will notice is that all hosts on the lan now have an IPv6 address in the prefix range.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="o">[</span>~<span class="o">]</span>$ ifconfig en0 <span class="p">|</span> grep inet6
</span></span><span class="line"><span class="cl">	inet6 fe80::bae8:56ff:ffff:ffff%en0 prefixlen <span class="m">64</span> scopeid 0x4
</span></span><span class="line"><span class="cl">	inet6 bbbb::bae8:56ff:ffff:ffff prefixlen <span class="m">64</span> autoconf
</span></span><span class="line"><span class="cl">	inet6 bbbb::615e:319:aaaa:aaaa prefixlen <span class="m">64</span> autoconf temporary</span></span></code></pre></div>
<p>So in order these addresses are:</p>
<ul>
<li>Link-local address; these always begin with <code>fe80::</code> and also include our MAC address of <code>b8:e8:56:ff:ff:ff</code>.</li>
<li>SLAAC address, which is a combination of the prefix and our MAC.</li>
<li>One temporary address, which hides the network card address. This is the one that should be &ldquo;used by applications&rdquo;, though I have no idea what this means in practice: Wireshark says everything on the LAN is using the MAC-based IPv6 addresses, so I need to RTFM (read the fine manual) more.</li>
</ul>
<p>To see neighbours in OSX you can use <code>ndp</code>, the <code>R</code> flag in the example below indicates which host is a router.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="o">[</span>~<span class="o">]</span>$ ndp -an
</span></span><span class="line"><span class="cl">Neighbor                        Linklayer Address  Netif Expire    St Flgs Prbs
</span></span><span class="line"><span class="cl">bbbb::bae8:56ff:ffff:ffff        b8:e8:56:ff:ff:ff     en0 permanent R</span></span></code></pre></div>
<p>On linux the equivalent command is <code>ip -6 neigh show</code>.</p>
<p>So for Linux, iOS and Android this is all we need to do to get the routing to work, however on OSX clients we have one additional step.</p>
<p>As OSX doesn&rsquo;t accept the route advertisements by default, a route solicitation daemon called <code>rtsold</code> needs to be running on all client machines aside from the router. Below is the command I ran on macbook B, note on this machine the wireless adapter was <code>en0</code>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo rtsold en0</span></span></code></pre></div>
<p>Once this is started you should be able to ping the MeshThing from macbook B on the wireless lan.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="o">[</span>~<span class="o">]</span>$ ping6 aaaa::11:22ff:ffff:ffff
</span></span><span class="line"><span class="cl">PING6<span class="o">(</span><span class="nv">56</span><span class="o">=</span>40+8+8 bytes<span class="o">)</span> aaaa::1 --&gt; aaaa::11:22ff:ffff:ffff
</span></span><span class="line"><span class="cl"><span class="m">16</span> bytes from aaaa::11:22ff:ffff:ffff, <span class="nv">icmp_seq</span><span class="o">=</span><span class="m">0</span> <span class="nv">hlim</span><span class="o">=</span><span class="m">64</span> <span class="nv">time</span><span class="o">=</span>34.186 ms
</span></span><span class="line"><span class="cl"><span class="m">16</span> bytes from aaaa::11:22ff:ffff:ffff, <span class="nv">icmp_seq</span><span class="o">=</span><span class="m">1</span> <span class="nv">hlim</span><span class="o">=</span><span class="m">64</span> <span class="nv">time</span><span class="o">=</span>33.553 ms
</span></span><span class="line"><span class="cl">--- aaaa::11:22ff:fe33:4401 ping6 statistics ---
</span></span><span class="line"><span class="cl"><span class="m">2</span> packets transmitted, <span class="m">2</span> packets received, 0.0% packet loss
</span></span><span class="line"><span class="cl">round-trip min/avg/max/std-dev <span class="o">=</span> 33.553/33.870/34.186/0.316 ms</span></span></code></pre></div>
<p>This is part of my ongoing hardware hacking; for more details on how this started see <a href="http://wolfe.id.au/2013/12/22/adding-an-icsp-header-to-the-atmega256rfr2/">Adding an ICSP header to the ATmega256RFR2</a>. As a note, both my <a href="http://www.atmel.com">Atmel</a> board and the MeshThing use the ATmega256RFR2 chip.</p>
<p>The next goal is to setup some name services, probably via MDNS, and test accessing the web site (yes on the 8 bit micro controller) from a mobile phone connected to the wireless network.</p>
<h2 id="update">Update</h2>
<p>Need to look into using site-local addresses in the <code>fc00::/7</code> block rather than unallocated public space, as recommended by <a href="https://twitter.com/LapTop006">Julien Goodwin</a> see <a href="http://en.wikipedia.org/wiki/Unique_local_address">unique local addresses</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Getting a new node project started with npm</title>
      <link>https://www.wolfe.id.au/2014/02/01/getting-a-new-node-project-started-with-npm/</link>
      <pubDate>Sat, 01 Feb 2014 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2014/02/01/getting-a-new-node-project-started-with-npm/</guid>
<description>&lt;p&gt;The aim of this guide is to help people get started with &lt;a href=&#34;http://nodejs.org&#34;&gt;Node.js&lt;/a&gt; and &lt;a href=&#34;http://npmjs.org/&#34;&gt;npm&lt;/a&gt;, while also showing some of the handy commands I use to bootstrap my projects.&lt;/p&gt;
&lt;p&gt;Create your project directory.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;mkdir npmtest
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Change to the directory you just created.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;cd !$
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Tell git to make a repo.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;git init
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Pull down a preconfigured &lt;code&gt;.gitignore&lt;/code&gt; file for node projects from github.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;wget https://raw.githubusercontent.com/github/gitignore/master/Node.gitignore -O .gitignore
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Pull down a basic &lt;code&gt;Makefile&lt;/code&gt; I use for my projects.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>The aim of this guide is to help people get started with <a href="http://nodejs.org">Node.js</a> and <a href="http://npmjs.org/">npm</a>, while also showing some of the handy commands I use to bootstrap my projects.</p>
<p>Create your project directory.</p>
<pre tabindex="0"><code>mkdir npmtest
</code></pre><p>Change to the directory you just created.</p>
<pre tabindex="0"><code>cd !$
</code></pre><p>Tell git to make a repo.</p>
<pre tabindex="0"><code>git init
</code></pre><p>Pull down a preconfigured <code>.gitignore</code> file for node projects from github.</p>
<pre tabindex="0"><code>wget https://raw.githubusercontent.com/github/gitignore/master/Node.gitignore -O .gitignore
</code></pre><p>Pull down a basic <code>Makefile</code> I use for my projects.</p>
<pre tabindex="0"><code>wget https://gist.github.com/wolfeidau/8748317/raw/172a6adb79777676a8815da5719ef659fb66a35b/Makefile
</code></pre><p>This Makefile contains a few handy targets:</p>
<ul>
<li><code>test</code> runs tests in the <code>test</code> folder using the <a href="http://visionmedia.github.io/mocha/">mocha</a> test framework.</li>
<li><code>jshint</code> uses <a href="http://www.jshint.com/">jshint</a> to check over the code.</li>
<li><code>skel</code> generates a basic structure for my project creating <code>index.js</code> and, <code>lib</code>, <code>example</code> and <code>test</code> directories and installs mocha and <a href="http://chaijs.com">chai</a> the BDD / TDD assertion library.</li>
<li>The default target, which is invoked by just running <code>make</code>; this runs the <code>jshint</code> and <code>test</code> targets.</li>
</ul>
<p><em>Note:</em> You will need to install jshint globally using <code>npm install -g jshint</code>.</p>
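<p>For reference, the <code>skel</code> target boils down to something like this (an assumption based on the description above, not the gist&rsquo;s exact contents):</p>

```shell
# Rough shell equivalent of the Makefile's skel target:
# create the basic project layout, then install the test dependencies.
cd "$(mktemp -d)"
mkdir -p lib example test
touch index.js
# npm install --save-dev mocha chai   # left commented out: needs network access
ls -d lib example test
```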
<p>Now we will use the <code>skel</code> target to generate our project structure.</p>
<pre tabindex="0"><code>make skel
</code></pre><p>Create a project on github using <a href="https://github.com/github/hub">hub</a>; if you&rsquo;re on OSX you can install this with <a href="https://github.com/Homebrew/homebrew">homebrew</a>. We do this sooner rather than later so npm can pick this information up when building the <code>package.json</code>.</p>
<pre tabindex="0"><code>hub create
</code></pre><p>Now initialise your project.</p>
<pre tabindex="0"><code>npm init
</code></pre><p>This should ask for a bunch of information. Note: leave the version at 0.0.0; we will change it later. For those interested this is driven by <a href="https://github.com/npm/init-package-json">init-package-json</a>.</p>
<pre tabindex="0"><code>...
name: (npmtest)
version: (0.0.0)
description: Some NPM test.
entry point: (index.js)
test command: make test
git repository: (git://github.com/wolfeidau/npmtest.git)
keywords: npm
author: Mark Wolfe &lt;mark@wolfe.id.au&gt;
license: (ISC) MIT
About to write to /Users/markw/Code/Javascript/npmtest/package.json:

{
  &#34;name&#34;: &#34;npmtest&#34;,
  &#34;version&#34;: &#34;0.0.0&#34;,
  &#34;description&#34;: &#34;Some NPM test.&#34;,
  &#34;main&#34;: &#34;index.js&#34;,
  &#34;scripts&#34;: {
    &#34;test&#34;: &#34;make test&#34;
  },
  &#34;repository&#34;: {
    &#34;type&#34;: &#34;git&#34;,
    &#34;url&#34;: &#34;git://github.com/wolfeidau/npmtest.git&#34;
  },
  &#34;keywords&#34;: [
    &#34;npm&#34;
  ],
  &#34;author&#34;: &#34;Mark Wolfe &lt;mark@wolfe.id.au&gt;&#34;,
  &#34;license&#34;: &#34;MIT&#34;,
  &#34;bugs&#34;: {
    &#34;url&#34;: &#34;https://github.com/wolfeidau/npmtest/issues&#34;
  },
  &#34;homepage&#34;: &#34;https://github.com/wolfeidau/npmtest&#34;
}

Is this ok? (yes) yes
</code></pre><p>Once you have added some code to <code>index.js</code>, and some tests of course, add and commit your code.</p>
<pre tabindex="0"><code>git add .
git commit -m &#39;Initial release&#39;
</code></pre><p>Now you&rsquo;re ready to release; use npm to update the version. This command takes one of three arguments, <code>major</code>, <code>minor</code> or <code>patch</code>, each of which increments the corresponding part of the version number. In the example below we go from <code>0.0.0</code> to <code>0.1.0</code>.</p>
<pre tabindex="0"><code>npm version minor
</code></pre><p>Run your tests!</p>
<pre tabindex="0"><code>npm test
</code></pre><p>Push to GitHub; the version command automatically tags your project, so we can check that version out if we need to!</p>
<pre tabindex="0"><code>git push origin master --tags
</code></pre><p>Ship it.</p>
<pre tabindex="0"><code>npm publish
</code></pre>]]></content:encoded>
    </item>
    
    <item>
      <title>Adding an ICSP header to the ATmega256RFR2</title>
      <link>https://www.wolfe.id.au/2013/12/22/adding-an-icsp-header-to-the-atmega256rfr2/</link>
      <pubDate>Sun, 22 Dec 2013 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2013/12/22/adding-an-icsp-header-to-the-atmega256rfr2/</guid>
      <description>&lt;p&gt;I am currently working on building a simple wireless sensor using the &lt;a href=&#34;http://www.atmel.com/tools/atmega256rfr2-xpro.aspx&#34;&gt;ATmega256RFR2 Xplained Pro Evaluation Kit&lt;/a&gt;. This kit includes a base board and a bunch of other peripherals to mess around with.&lt;/p&gt;
&lt;p&gt;When I started reading up on the kit I noticed it required Windows to install some 600 MB+ IDE (gasp) to hack on a small microcontroller. This really didn&amp;rsquo;t gel with me, especially considering said IDE:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am currently working on building a simple wireless sensor using the <a href="http://www.atmel.com/tools/atmega256rfr2-xpro.aspx">ATmega256RFR2 Xplained Pro Evaluation Kit</a>. This kit includes a base board and a bunch of other peripherals to mess around with.</p>
<p>When I started reading up on the kit I noticed it required Windows to install some 600 MB+ IDE (gasp) to hack on a small microcontroller. This really didn&rsquo;t gel with me, especially considering said IDE:</p>
<ul>
<li>ONLY works on Windows</li>
<li>Visual Studio is a bloated piece of shit</li>
<li>Has all these other &ldquo;professional&rdquo; features which I don&rsquo;t give a shit about</li>
<li>Is driving a 30 MB included gcc tool chain.</li>
</ul>
<p>So how do we ignore this and get started without this thing?</p>
<p>Well, firstly we need an ICSP port; this will enable us to write a simple boot loader onto the chip, which in turn will let us load firmware onto the board and get started doing some basic hacking.</p>
<p>The cable I used had the following coloured wires; to bind these into a six pin connector <a href="https://twitter.com/geekscape">Andy Gelme</a> used a hot glue gun. It is amazing how handy these things are.</p>
<p>The configuration of the header wire colours is:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">BLUE	RED
</span></span><span class="line"><span class="cl">GREEN	ORANGE
</span></span><span class="line"><span class="cl">YELLOW	BROWN</span></span></code></pre></div>
<p>The standard Arduino ICSP 6 pin header is laid out as:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">MISO	VCC
</span></span><span class="line"><span class="cl">SCK     MOSI
</span></span><span class="line"><span class="cl">RESET	GND</span></span></code></pre></div>
<p>So to wire this up to the board I first had to solder 5 pins into the <code>spare signals header</code>; this enabled me to access the <code>RSTN</code> pin, which in the ICSP header is called <code>RESET</code>.</p>
<p>The wires were connected as follows:</p>
<ul>
<li>MISO (BLUE) PIN17 on the ext5 header</li>
<li>VCC (RED) PIN20 on the ext5 header</li>
<li>SCK (GREEN) PIN18 on the ext5 header</li>
<li>MOSI (ORANGE) PIN16 on the ext5 header</li>
<li>RESET (YELLOW) RSTN on the spare signals header</li>
<li>GND (BROWN) to PIN19 on the ext5 header</li>
</ul>
<p>The final product looks like:</p>
<p>So to write the firmware onto the board a <a href="http://www.freetronics.com/collections/modules/products/usbasp-icsp-programmer-for-avr-arduino">USBasp ICSP Programmer for AVR / Arduino</a> was used. The firmware I am currently running was built by some hackers at <a href="http://www.hackmelbourne.org/">CCHS Melbourne</a>. At the end of the day a fork of <a href="http://www.contiki-os.org/">Contiki</a> was up and running on the board! I am hoping this gets open sourced soon.</p>
<p>The toolset I am using at the moment is <a href="http://www.obdev.at/products/crosspack/index.html">Crosspack AVR</a> which works great on OSX. I also have a build environment on ubuntu in a virtual machine which I am using just to test things.</p>
<p>In addition to this fork of Contiki we also tested running some simple Arduino applications on my board using <code>avrdude</code> to upload the hex files.</p>
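<p>For anyone repeating this, the avrdude invocation looks roughly like the following; the part name <code>m256rfr2</code> and the hex filename here are assumptions, so check the part list your avrdude version prints for the exact name:</p>

```shell
# Verify the programmer can talk to the chip over the new ICSP header...
avrdude -c usbasp -p m256rfr2

# ...then write the firmware image to flash (Intel hex format).
avrdude -c usbasp -p m256rfr2 -U flash:w:firmware.hex:i
```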
<p>The aim of this project is to get an MQTT client onto this board and transmit values from the various sensors to a gateway device running linux over <a href="http://en.wikipedia.org/wiki/IEEE_802.15.4">802.15.4</a> mesh using <a href="http://en.wikipedia.org/wiki/6LoWPAN">6LoWPAN</a>.</p>
<p>Thanks to the people at <a href="http://www.hackmelbourne.org/">CCHS Melbourne</a> for their help.</p>
<p>I am planning to follow up with some posts on each of the layers as I learn more and work my way up the stack.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Heroku logging what I have Learned</title>
      <link>https://www.wolfe.id.au/2013/06/22/heroku-logging-what-i-have-learned/</link>
      <pubDate>Sat, 22 Jun 2013 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2013/06/22/heroku-logging-what-i-have-learned/</guid>
<description>&lt;p&gt;Recently I started work for a new employer who has a number of applications hosted on &lt;a href=&#34;https://heroku.com&#34;&gt;Heroku&lt;/a&gt;. My
typical strategy when faced with a new environment and systems is to try and get some monitoring in place as soon as
possible. I do this for a few reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It enables me to establish a baseline for performance and behavior of the system.&lt;/li&gt;
&lt;li&gt;It lets me quickly zero in on any areas I can help with.&lt;/li&gt;
&lt;li&gt;It makes it easier for me to see when I get things wrong while I am still learning the ropes.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So when I talk about monitoring I am mainly talking about collecting some basic performance and health related metrics
which fall into two different types:&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Recently I started work for a new employer who has a number of applications hosted on <a href="https://heroku.com">Heroku</a>. My
typical strategy when faced with a new environment and systems is to try and get some monitoring in place as soon as
possible. I do this for a few reasons:</p>
<ol>
<li>It enables me to establish a baseline for performance and behavior of the system.</li>
<li>It lets me quickly zero in on any areas I can help with.</li>
<li>It makes it easier for me to see when I get things wrong while I am still learning the ropes.</li>
</ol>
<p>So when I talk about monitoring I am mainly talking about collecting some basic performance and health related metrics
which fall into two different types:</p>
<ul>
<li><code>gauges</code>, typically a measurement such as current CPU or memory usage, or the latency of upstream APIs, usually averaged over a period.</li>
<li><code>counters</code>, typically things like a count of requests serviced by a particular REST endpoint, or emails sent by a background job.</li>
</ul>
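<p>As a concrete sketch of the statsd line protocol (the metric names here are hypothetical), gauges use the <code>|g</code> type and counters <code>|c</code>, and each line is fired at statsd as a UDP datagram:</p>

```shell
# statsd wire format is <name>:<value>|<type>; names here are hypothetical.
gauge='us-east.prod.app.memory_rss:21.22|g'   # a gauge sample
counter='us-east.prod.app.requests:1|c'       # a counter increment
printf '%s\n' "$gauge" "$counter"

# Actually shipping a metric is one UDP datagram (assuming statsd on :8125):
#   printf '%s' "$counter" | nc -u -w0 localhost 8125
```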
<p>So when I have been managing systems in the past I have typically been very careful to follow a neat and clear naming standard
for metrics such as:</p>
<ul>
<li>region, the physical location or region where the data center hosting this server is located.</li>
<li>environment, being either development, staging or production.</li>
<li>hostname, for host level metrics we include the hostname.</li>
<li>application, which represents the service component of our system.</li>
<li>label, a label for the data being gathered.</li>
<li>metric, the metric which has been gathered over the configured time period.</li>
</ul>
<p>So below we have an example of a gauge of the application&rsquo;s RSS memory captured on a production server hosted in us-east.</p>
<blockquote>
<p>us-east.prod.a1ad1f1111a1111db111e1baab111e11.app.memory_rss.median</p>
</blockquote>
<p>So armed with <a href="https://github.com/etsy/statsd">statsd</a> and <a href="http://librato.com">Librato</a> I started adding monitoring to the
system; unfortunately this is where things went a bit pear shaped. Having been used to monitoring static
environments, I was not prepared for the changing and in some ways obfuscated world of fully isolated process containers.</p>
<p>In these process containers things like hostnames and IP addresses change each time your service is moved within
the environment, which makes correlating metrics emitted from statsd very difficult. From the outside of Heroku we get a picture of dynos web.1 and web.2,
but inside the container your processes are unable to access these labels; they are essentially just anonymous processes tasked with doing a particular job.</p>
<p>So this resulted in an endlessly growing list of metrics: every time a process was restarted, I would get a new hostname and a whole new group of metrics. One thing
to note is that Heroku restarts your process at least once per day.</p>
<p>So how do you monitor stuff housed in Heroku? The answer is logging. Heroku gives you the ability to provision
log drains; these endpoints are sent a copy of all messages written to STDOUT and STDERR by your application. When configured, the
log drain transports your logs via either syslog or HTTP(S) to an external server, running in the same AWS region as Heroku.
Most importantly, these logs include the context of which <code>dyno</code> is emitting the information. To read more about this feature take a look at the <a href="https://devcenter.heroku.com/articles/logging#syslog-drains">Logplex</a> documentation.</p>
<p>When I started my new role one of my
coworkers mentioned that he was already logging metrics using a feature called <code>log-runtime-metrics</code>, available via Heroku labs.
Initially I thought this was a bit limited; however, on further examination it is actually an extremely reliable measure of a
process&rsquo;s resource utilisation. For more information on this feature see <a href="https://devcenter.heroku.com/articles/log-runtime-metrics">Heroku Labs: log-runtime-metrics</a>.</p>
<p>So just for clarity, this is what these runtime metrics look like; note the reference to the dyno in the source, along with the metric name, value and units.</p>
<blockquote>
<p>source=heroku.2808254.web.1.d97d0ea7-cf3d-411b-b453-d2943a50b456 measure=memory_rss val=21.22 units=MB</p>
</blockquote>
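<p>Pulling the useful parts out of a line like that is straightforward; the following is a minimal sketch of the kind of filtering l2met does (this is my own illustration, not l2met&rsquo;s actual code), extracting the dyno name, measure and value:</p>

```shell
# A sample line from log-runtime-metrics (copied from the post above).
line='source=heroku.2808254.web.1.d97d0ea7-cf3d-411b-b453-d2943a50b456 measure=memory_rss val=21.22 units=MB'

# Split each key=value pair, then pull the dyno name (e.g. "web.1") out of
# the dotted source field -- it sits just before the trailing instance id.
parsed=$(printf '%s\n' "$line" | awk '{
  for (i = 1; i <= NF; i++) { split($i, kv, "="); m[kv[1]] = kv[2] }
  n = split(m["source"], s, ".")
  print s[n-2] "." s[n-1], m["measure"], m["val"]
}')
echo "$parsed"
```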
<p>So now that I had metrics, with context, in a stream coming in via logs, I needed a way of getting this data to Librato. Having already had contact with <a href="https://twitter.com/nwekwerth">Nik Wekwerth</a>,
I decided to ask whether he had any suggestions. After a few emails I was fortunate enough to be passed on to the CTO <a href="https://twitter.com/josephruscio">Joseph Ruscio</a>, who after a few questions
suggested I try out <a href="http://r.32k.io/l2met-introduction">l2met</a>, a project by <a href="https://twitter.com/ryandotsmith">Ryan Smith</a>,
an engineer at Heroku. This service acts as a bridge between the two services, filtering the metrics out of the log messages and sending them off to Librato at a configurable interval.</p>
<p>After learning a bit of <a href="http://golang.org/">golang</a> I was able to add support for the Heroku labs runtime metrics
feature and tune a few things for my requirements. This was helped along by some good high level tests and a few helpful tips from the author.</p>
<p>An example of my new log based metrics is as follows:</p>
<blockquote>
<p>us-east.prod.app.load_avg_5m.median</p>
</blockquote>
<p>The missing piece here is the <code>dyno</code>; fortunately l2met sends this in an optional <code>source</code> property, enabling me to use aggregate
or per-<code>dyno</code> metrics in my dashboard. For more information on the attributes supported by Librato see their
<a href="http://dev.librato.com/v1/metrics">API documentation</a>.</p>
<p>One bonus from this is that when dynos are added or removed the graphs in Librato automatically reflect this without requiring
any configuration.</p>
<p>Now some key takeaway points:</p>
<ul>
<li>Although Heroku appears to hide a lot of information from you at first, it does provide some very good instrumentation as long as you understand how to tap into it.</li>
<li>Logging is the primary source of information about the state of your hosted applications, use it as much as possible.</li>
<li><a href="http://codeascraft.com/2011/02/15/measure-anything-measure-everything/">Measure Anything, Measure Everything</a> have a read of this if you haven&rsquo;t before it is a great post.</li>
</ul>
<p>Note, I am still using statsd for some of my counters due to how frequently they change; I am hoping to implement
a scheduled emitter for them in the future which I can pass through via logs. To reduce the volume of metrics I am no longer
including the hostname in the metric names, and am just acquiring the aggregate value across all processes.</p>
<p>I plan to follow this up with a post containing some notes on how I configured l2met, hopefully I can get this out in the next couple of weeks.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Installing Ruby 2.0.0 with Dtrace Support</title>
      <link>https://www.wolfe.id.au/2013/02/26/installing-ruby-2.0.0-with-dtrace-support/</link>
      <pubDate>Tue, 26 Feb 2013 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2013/02/26/installing-ruby-2.0.0-with-dtrace-support/</guid>
      <description>&lt;p&gt;The aim of this post is to guide the reader through the process of installing &lt;a href=&#34;http://www.ruby-lang.org/en/&#34;&gt;ruby 2.0.0&lt;/a&gt;
into &lt;a href=&#34;https://github.com/sstephenson/rbenv&#34;&gt;rbenv&lt;/a&gt; with dtrace probes enabled. As rbenv uses &lt;a href=&#34;https://github.com/sstephenson/ruby-build&#34;&gt;ruby-build&lt;/a&gt;,
which currently downloads and compiles its own copy of &lt;a href=&#34;http://www.openssl.org/&#34;&gt;openssl&lt;/a&gt; rather than using the one from
&lt;a href=&#34;http://mxcl.github.com/homebrew/&#34;&gt;homebrew&lt;/a&gt;, I prefer to build ruby manually against the homebrew one.&lt;/p&gt;
&lt;p&gt;Note that you MUST install &lt;a href=&#34;https://developer.apple.com/xcode/&#34;&gt;xcode&lt;/a&gt; before installing anything, then install homebrew,
rbenv, and lastly openssl.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;ruby -e &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;$(&lt;/span&gt;curl -fsSL https://raw.github.com/mxcl/homebrew/go&lt;span class=&#34;k&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;brew install rbenv
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;brew install openssl&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Next to overcome the fact that OSX doesn&amp;rsquo;t have an openssl ca certificate bundle, use the following brew to create and
maintain one using the CA certs stored in your keychain.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The aim of this post is to guide the reader through the process of installing <a href="http://www.ruby-lang.org/en/">ruby 2.0.0</a>
into <a href="https://github.com/sstephenson/rbenv">rbenv</a> with dtrace probes enabled. As rbenv uses <a href="https://github.com/sstephenson/ruby-build">ruby-build</a>,
which currently downloads and compiles its own copy of <a href="http://www.openssl.org/">openssl</a> rather than using the one from
<a href="http://mxcl.github.com/homebrew/">homebrew</a>, I prefer to build ruby manually against the homebrew one.</p>
<p>Note that you MUST install <a href="https://developer.apple.com/xcode/">xcode</a> before installing anything, then install homebrew,
rbenv, and lastly openssl.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ruby -e <span class="s2">&#34;</span><span class="k">$(</span>curl -fsSL https://raw.github.com/mxcl/homebrew/go<span class="k">)</span><span class="s2">&#34;</span>
</span></span><span class="line"><span class="cl">brew install rbenv
</span></span><span class="line"><span class="cl">brew install openssl</span></span></code></pre></div>
<p>Next to overcome the fact that OSX doesn&rsquo;t have an openssl ca certificate bundle, use the following brew to create and
maintain one using the CA certs stored in your keychain.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">brew tap raggi/ale <span class="o">&amp;&amp;</span> brew install openssl-osx-ca</span></span></code></pre></div>
<p>Make a temporary directory to build the sources in, download the 2.0.0-p0 sources and extract them into this location, then
navigate into the <code>ruby-2.0.0-p0</code> directory containing the sources.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">mkdir ~/temp <span class="o">&amp;&amp;</span> <span class="nb">cd</span> ~/temp
</span></span><span class="line"><span class="cl">curl -L ftp://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p0.tar.bz2 <span class="p">|</span> tar xjf -
</span></span><span class="line"><span class="cl"><span class="nb">cd</span> ruby-2.0.0-p0</span></span></code></pre></div>
<p>Run the configure script with the arguments listed below.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">./configure --prefix<span class="o">=</span><span class="nv">$HOME</span>/.rbenv/versions/2.0.0-p0 --enable-dtrace <span class="se">\
</span></span></span><span class="line"><span class="cl"><span class="se"></span>--with-opt-dir<span class="o">=</span><span class="sb">`</span>brew --prefix openssl<span class="sb">`</span></span></span></code></pre></div>
<p>Within this rather verbose output you should see the following; this indicates that dtrace has been included.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">checking whether dtrace USDT is available... yes</span></span></code></pre></div>
<p>Build ruby. Note I am invoking make with the &lsquo;-j&rsquo; or &lsquo;&ndash;jobs&rsquo; option, which tells make to execute many recipes simultaneously;
in my case I chose 9 as I have eight cores (number of cores + 1).</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">make -j9</span></span></code></pre></div>
<p>Now install ruby into rbenv with the label 2.0.0-p0.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">make install</span></span></code></pre></div>
<p>To try it out we will alter our shell to use the 2.0.0-p0 version.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rbenv shell 2.0.0-p0</span></span></code></pre></div>
<p>Running <code>ruby -v</code> should output the following.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ruby 2.0.0p0 <span class="o">(</span>2013-02-24 revision 39474<span class="o">)</span> <span class="o">[</span>x86_64-darwin12.2.0<span class="o">]</span></span></span></code></pre></div>
<p>You can remove the temporary directory you built ruby in now.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rm -r ~/temp</span></span></code></pre></div>
<p>NOTE: Bundler has just been updated to cater for ruby 2.0.0 but you will need to retrieve the gem manually and install it as follows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">wget https://rubygems.org/downloads/bundler-1.3.0.gem
</span></span><span class="line"><span class="cl">gem install bundler-1.3.0.gem
</span></span><span class="line"><span class="cl">rm bundler-1.3.0.gem</span></span></code></pre></div>
]]></content:encoded>
    </item>
    
    <item>
      <title>Raspberry Pi Temperature and Humidity Project Construction</title>
      <link>https://www.wolfe.id.au/2012/12/22/raspberry-pi-temperature-and-humidity-project-construction/</link>
      <pubDate>Sat, 22 Dec 2012 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2012/12/22/raspberry-pi-temperature-and-humidity-project-construction/</guid>
      <description>&lt;p&gt;For some time I have had a &lt;a href=&#34;http://www.raspberrypi.org/&#34;&gt;Raspberry Pi&lt;/a&gt; languishing on my desk, like many I bought one early on and played around with
it for a while and then got busy with other stuff. Recently however I have been looking into distributed sensor networks
which led me to consider how I could use the Raspberry Pi to aid in my research. If I could get a number of these devices
connected up together gathering some measurements of some sort, I could a) make some graphs, b) produce some useful
test data and c) get some real world experience with sensor networks.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For some time I have had a <a href="http://www.raspberrypi.org/">Raspberry Pi</a> languishing on my desk, like many I bought one early on and played around with
it for a while and then got busy with other stuff. Recently however I have been looking into distributed sensor networks
which led me to consider how I could use the Raspberry Pi to aid in my research. If I could get a number of these devices
connected up together gathering some measurements of some sort, I could a) make some graphs, b) produce some useful
test data and c) get some real world experience with sensor networks.</p>
<p>So I did a bit of research on what hardware I could connect up to the Raspberry Pi with the least amount of circuitry;
this led me to the AdaFruit site and in particular this article <a href="http://learn.adafruit.com/dht-humidity-sensing-on-raspberry-pi-with-gdocs-logging/overview">DHT Humidity Sensing on Raspberry Pi with GDocs Logging</a>.
This was almost what I was after but still had a little too much &ldquo;construction&rdquo;. After searching around I managed to
deduce that a small pre-made Arduino compatible board existed with all the circuitry already assembled; all that remained was
to solder 3 wires onto the board and get this connected to the Raspberry Pi. After some foraging around in my computer
&ldquo;junk&rdquo; I found some old case wiring which had small 4 pin connectors which could be plugged into the gpio header and could
have their pins rejigged into any combination of one, two or four pin headers for maximum flexibility.</p>
<p>So to build this project you will need the following items, I have included the approximate cost.</p>
<ul>
<li>1 x Raspberry Pi ($42)</li>
<li>1 x 700mA power supply ($12)</li>
<li>1 x freetronics <a href="http://www.freetronics.com/humid">Humidity and Temperature Sensor Module</a> ($20)</li>
<li>2 x 4 pin connectors, as found in old PC cases or CD drive audio cables</li>
</ul>
<p>Total cost $74.</p>
<p>Assembly is quite simple.</p>
<p>Solder three wires onto the freetronics board; in my case I had red, which I soldered to the data pin, white, which
I soldered to the 3.3v pin, and black, which I soldered to the ground pin.</p>
<p>Move the ground wire into its own 4 pin connector, and put the 3.3v and data pins at either end of another 4 pin connector.</p>
<p>Connect these as illustrated to the Raspberry Pi, being sure to triple check the location.</p>
<p>Bask in the glow of the little LED on the addon board which indicates you have powered it up.</p>
<p>First you will need to build and install the bcm2835 library. I grabbed the latest sources for
<a href="http://www.open.com.au/mikem/bcm2835/">Mike McCauley&rsquo;s bcm2835 library</a> and installed them on the pi.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">pi@raspberrypi ~ $ wget http://www.open.com.au/mikem/bcm2835/bcm2835-1.14.tar.gz
</span></span><span class="line"><span class="cl">pi@raspberrypi ~ $ tar xvzf bcm2835-1.14.tar.gz
</span></span><span class="line"><span class="cl">pi@raspberrypi ~ $ <span class="nb">cd</span> bcm2835-1.14
</span></span><span class="line"><span class="cl">pi@raspberrypi ~/bcm2835-1.14 $ ./configure
</span></span><span class="line"><span class="cl">pi@raspberrypi ~/bcm2835-1.14 $ make
</span></span><span class="line"><span class="cl">...
</span></span><span class="line"><span class="cl">pi@raspberrypi ~/bcm2835-1.14 $ sudo make install</span></span></code></pre></div>
<p>Download the software as instructed in the linked ADAFruit article, you will need git so install that first.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">pi@raspberrypi ~ $ sudo apt-get install git
</span></span><span class="line"><span class="cl">pi@raspberrypi ~ $ git clone https://github.com/adafruit/Adafruit-Raspberry-Pi-Python-Code.git
</span></span><span class="line"><span class="cl">pi@raspberrypi ~ $ <span class="nb">cd</span> Adafruit-Raspberry-Pi-Python-Code/
</span></span><span class="line"><span class="cl">pi@raspberrypi ~/Adafruit-Raspberry-Pi-Python-Code $ <span class="nb">cd</span> Adafruit_DHT_Driver</span></span></code></pre></div>
<p>Build the software.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">pi@raspberrypi ~/Adafruit-Raspberry-Pi-Python-Code/Adafruit_DHT_Driver $ make</span></span></code></pre></div>
<p>Run the Adafruit_DHT command.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">pi@raspberrypi ~/Adafruit-Raspberry-Pi-Python-Code/Adafruit_DHT_Driver $ sudo ./Adafruit_DHT <span class="m">2302</span> <span class="m">4</span>
</span></span><span class="line"><span class="cl">Adafruit_DHT <span class="m">2302</span> <span class="m">4</span>
</span></span><span class="line"><span class="cl">Using pin <span class="c1">#4</span>
</span></span><span class="line"><span class="cl">Data <span class="o">(</span>40<span class="o">)</span>: 0x2 0x3e 0x0 0xde 0x1e
</span></span><span class="line"><span class="cl"><span class="nv">Temp</span> <span class="o">=</span>  22.2 *C, <span class="nv">Hum</span> <span class="o">=</span> 57.4 %</span></span></code></pre></div>
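<p>Since the Adafruit_DHT output is plain text, turning readings into a log only needs a little awk. Here is a minimal sketch (the CSV layout and timestamp format are my own choices, not part of the Adafruit tooling):</p>

```shell
# Canned sample; a live reading would come from: sudo ./Adafruit_DHT 2302 4
reading='Temp =  22.2 *C, Hum = 57.4 %'

# Split on runs of spaces and the punctuation around the values,
# leaving the temperature as field 2 and the humidity as field 4.
csv=$(printf '%s\n' "$reading" | awk -F'[ =%*C,]+' '{ print $2 "," $4 }')
echo "$(date +%Y-%m-%dT%H:%M:%S),$csv"   # e.g. append with >> dht22.csv
```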
<p>I am working on another post with some details of how I am using this device, and the software I will be driving it with,
this should go out in the next couple of weeks.</p>
<p>Hope others find this useful.</p>
<h2 id="update">Update</h2>
<p><code>2013-02-28</code> Removed the small change to the make file as it is not required anymore.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>What is HMAC Authentication and why is it useful?</title>
      <link>https://www.wolfe.id.au/2012/10/20/what-is-hmac-authentication-and-why-is-it-useful/</link>
      <pubDate>Sat, 20 Oct 2012 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2012/10/20/what-is-hmac-authentication-and-why-is-it-useful/</guid>
<description>&lt;p&gt;I will start with a little background, then outline the options for authentication of HTTP based server APIs with a focus on
HMAC, and lastly provide some tips for developers building and using &lt;a href=&#34;http://en.wikipedia.org/wiki/HMAC&#34;&gt;HMAC&lt;/a&gt; based
authentication.&lt;/p&gt;
&lt;p&gt;Recently I have been doing quite a bit of research and hacking in and around server APIs. Authentication for
these types of APIs really depends on the type of service, and falls into a couple of general categories:&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I will start with a little background, then outline the options for authentication of HTTP based server APIs with a focus on
HMAC, and lastly provide some tips for developers building and using <a href="http://en.wikipedia.org/wiki/HMAC">HMAC</a> based
authentication.</p>
<p>Recently I have been doing quite a bit of research and hacking in and around server APIs. Authentication for
these types of APIs really depends on the type of service, and falls into a couple of general categories:</p>
<ul>
<li>Consumer or personal applications, which typically use a simple username and password; OAuth is used in some cases, however this
is more for identifying an individual&rsquo;s authorisation session within a trusted third party.</li>
<li>Infrastructure applications, which typically use a set of credentials different to the owner&rsquo;s/admin&rsquo;s credentials
and provide some sort of automation API for businesses or devices to enhance the function of, or control, something.</li>
</ul>
<p>For infrastructure APIs I have had a look at a few options, these are explained in some detail below.</p>
<h2 id="basic-authentication">Basic Authentication</h2>
<p>This is the simplest to implement and for some implementations can work well; however it requires transport level
encryption, as the username and password are presented with every request. For more information on this see the
<a href="http://en.wikipedia.org/wiki/Basic_access_authentication">Wikipedia article</a>.</p>
<h2 id="digest-authentication">Digest Authentication</h2>
<p>This is actually quite a bit closer to HMAC than basic; it uses md5 to hash the authentication attributes in a way which
makes it much more difficult to intercept and compromise the username and password. I recommend reading over
the <a href="http://en.wikipedia.org/wiki/Digest_access_authentication">Wikipedia page</a> on the subject; in short it is more
secure than basic auth, however this is entirely dependent on how many of the safeguards are implemented in the client
software, and the complexity of the password is a factor.</p>
<p>Unlike basic authentication, this does not require an SSL connection; that said, make sure you read the Wikipedia article,
as there are some issues with man-in-the-middle attacks.</p>
<h2 id="hmac-authentication">HMAC Authentication</h2>
<p>Hash-based message authentication code (HMAC) is a mechanism for calculating a message authentication code using a hash function
in combination with a secret key. This can be used to verify both the integrity and the authenticity of a message.</p>
<p>Unlike the previous authentication methods there isn&rsquo;t, as far as I can tell, a standard way to do this within HTTP. That said, as this
is the main authentication method used by <a href="http://aws.amazon.com">Amazon Web Services</a> it is very well understood, and there are a number of
libraries which implement it. To use this form of authentication you utilise a key identifier and a secret key, with both
of these typically generated in an admin interface (more details below).</p>
<p>It is very important to note that one of the BIG differences with this type of authentication is that it signs the entire request;
if the Content-MD5 is included, this effectively guarantees the authenticity of the action. If a party in the middle fiddles
with the API call, either for malicious reasons or because of a bug in an intermediary proxy that drops some important headers, the
signature will not match.</p>
<p>To use HMAC authentication, a digest is computed over a composite of the URI, the request timestamp and some other headers (depending
on the implementation) using the supplied secret key. The key identifier, along with the digest encoded using <a href="http://en.wikipedia.org/wiki/Base64">Base64</a>,
is combined and added to the authorisation header.</p>
<p>The following example is from <a href="http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html">Amazon S3 documentation</a>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ruby" data-lang="ruby"><span class="line"><span class="cl"><span class="s2">&#34;Authorization: AWS &#34;</span> <span class="o">+</span> <span class="no">AWSAccessKeyId</span> <span class="o">+</span> <span class="s2">&#34;:&#34;</span>  <span class="o">+</span> <span class="n">base64</span><span class="p">(</span><span class="n">hmac</span><span class="o">-</span><span class="n">sha1</span><span class="p">(</span><span class="no">VERB</span> <span class="o">+</span> <span class="s2">&#34;</span><span class="se">\n</span><span class="s2">&#34;</span>
</span></span><span class="line"><span class="cl">                   <span class="o">+</span> <span class="no">CONTENT</span><span class="o">-</span><span class="no">MD5</span> <span class="o">+</span> <span class="s2">&#34;</span><span class="se">\n</span><span class="s2">&#34;</span>
</span></span><span class="line"><span class="cl">                   <span class="o">+</span> <span class="no">CONTENT</span><span class="o">-</span><span class="no">TYPE</span> <span class="o">+</span> <span class="s2">&#34;</span><span class="se">\n</span><span class="s2">&#34;</span>
</span></span><span class="line"><span class="cl">                   <span class="o">+</span> <span class="no">DATE</span> <span class="o">+</span> <span class="s2">&#34;</span><span class="se">\n</span><span class="s2">&#34;</span>
</span></span><span class="line"><span class="cl">                   <span class="o">+</span> <span class="no">CanonicalizedAmzHeaders</span> <span class="o">+</span> <span class="s2">&#34;</span><span class="se">\n</span><span class="s2">&#34;</span>
</span></span><span class="line"><span class="cl">                   <span class="o">+</span> <span class="no">CanonicalizedResource</span><span class="p">))</span>
</span></span></code></pre></div><p>This results in an HTTP request with headers that look like this.</p>
<pre tabindex="0"><code>PUT /quotes/nelson HTTP/1.0
Authorization: AWS 44CF9590006BF252F707:jZNOcbfWmD/A/f3hSvVzXZjM2HU=
Content-Md5: c8fdb181845a4ca6b8fec737b3581d76
Content-Type: text/html
Date: Thu, 17 Nov 2005 18:49:58 GMT
X-Amz-Meta-Author: foo@bar.com
X-Amz-Magic: abracadabra
</code></pre><p>Note the AWS token after the colon is sometimes known as the service label; most services I have seen follow the convention of changing this to
an abbreviation of their name, or just HMAC.</p>
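<p>As a concrete illustration, the following Ruby sketch computes a signature in the style described above using OpenSSL; the key identifier, secret and request values here are made-up examples, not a working AWS signature:</p>

```ruby
require 'openssl'
require 'base64'

# Made-up credentials, normally generated in the service's admin interface.
key_id = 'EXAMPLEKEYID'
secret = 'example-secret-key'

# The canonical string: verb, selected headers and resource, newline separated.
string_to_sign = [
  'PUT',                               # HTTP verb
  'c8fdb181845a4ca6b8fec737b3581d76',  # Content-MD5
  'text/html',                         # Content-Type
  'Thu, 17 Nov 2005 18:49:58 GMT',     # Date
  '/quotes/nelson'                     # canonicalized resource
].join("\n")

# HMAC-SHA1 over the canonical string, then Base64 encode the raw digest.
mac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret, string_to_sign)
authorization = "HMAC #{key_id}:#{Base64.strict_encode64(mac)}"
```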
<p>If we examine the Amazon implementation closely, a few advantages over normal usernames and passwords become obvious:</p>
<ol>
<li>As mentioned, HMAC authentication guarantees the authenticity of the request by signing the headers; this is especially
the case if Content-MD5 is signed and checked by the server AND the client.</li>
<li>An admin can generate any number of key pairs and utilise them independently of their Amazon credentials.</li>
<li>As noted before, these are generated values and can be made as large as necessary for the hash algorithm used; Amazon uses 40 character
secrets with <a href="http://en.wikipedia.org/wiki/SHA-1">SHA-1</a>.</li>
<li>This form of authentication can be used without the need for SSL, as the secret is never actually transmitted, just the MAC.</li>
<li>As the key pairs are independent of admin credentials, they can be deleted or disabled when systems are compromised, thereby
disabling their use.</li>
</ol>
<p>As for disadvantages, there are indeed some:</p>
<ol>
<li>There is not a lot of consistency in the implementations, outside of the ones which interface with Amazon.</li>
<li>Server side implementations are few in number, and also very inconsistent.</li>
<li>If you do decide to build your own, be advised that cryptographic APIs like OpenSSL can be hard for those who haven&rsquo;t used them directly before; a single character difference in the input will result in a completely different value.</li>
<li>In cases where all headers within a request are signed you need to be VERY careful at the server or client
side to avoid headers being injected or modified by your libraries (more details below).</li>
</ol>
<hr>
<p>As I am currently developing, and indeed rewriting, some of my existing implementations, I thought I would put together a
list of tips for library authors.</p>
<ol>
<li>When writing the API, check your request on the wire to ensure nothing has been changed or &ldquo;tweaked&rdquo; by the HTTP
library you&rsquo;re using; mine added a character encoding attribute to the Content-Type.</li>
<li>Test that the order of your headers is correct on dispatch of the request as well; libraries may use a hash map (naturally ordered),
which may break your signature depending on the implementation. In the case of Amazon, they require you to sort your &ldquo;extra&rdquo;
headers alphabetically and lower case the header names before computing the signature.</li>
<li>Be careful of crazy Ruby libraries that snake case your header names (yes, this is bad form) before presenting them
to your code as the list of header names.</li>
<li>When debugging, print the canonical string used to generate the signature, preferably using something like
Ruby&rsquo;s inspect, which shows ALL characters. This will help both while developing, and when comparing against what the server
side actually receives.</li>
<li>Observe how various client or server APIs introduce or indeed remove headers.</li>
</ol>
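<p>On the subject of printing the canonical string, a quick Ruby illustration of why inspect helps; the strings here are made-up examples:</p>

```ruby
# Two canonical strings that print identically with puts but sign
# differently: one uses "\n" separators, the other has a stray "\r\n".
a = "GET\ntext/html\n/quotes/nelson"
b = "GET\r\ntext/html\n/quotes/nelson"

puts a == b     # false, yet both look the same when printed with puts
puts a.inspect  # "GET\ntext/html\n/quotes/nelson"
puts b.inspect  # the stray \r stands out immediately
```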
<p>From a security standpoint, a couple of basic recommendations:</p>
<ol>
<li>Use content MD5 at both ends of the conversation.</li>
<li>As a minimum, sign all headers which could influence the result of the operation.</li>
<li>Record the headers of every API call that may have side effects; on most web servers this can be enabled and added to
the web logs (again, ideally encoded the way Ruby&rsquo;s inspect does).</li>
</ol>
<p>So in closing, I certainly recommend using HMAC authentication, but be prepared to learn a lot about how HTTP works and a
little cryptography; in my view that can&rsquo;t hurt either way if you&rsquo;re building server side APIs.</p>
<h2 id="update">Update</h2>
<p>Based on some of the comments made when I submitted this to <a href="http://news.ycombinator.com/item?id=4676676">hacker news</a> I have
compiled some extra links and observations.</p>
<p>One interesting point made was the issue of replay attacks, which is where a valid message is maliciously or fraudulently repeated or delayed. This is
either performed by the originator or by a man in the middle who retransmits the message, possibly as a part of a denial of service.</p>
<h2 id="nonce">nonce</h2>
<p>To protect against these types of attacks you can use a <a href="http://en.wikipedia.org/wiki/Cryptographic_nonce">cryptographic nonce</a>, which is an arbitrary number usable only
once in a message exchange between a client and a server. These are in fact optionally used within the Digest authentication mentioned previously.</p>
<p>One of the comments linked an article which suggested the use of a nonce with HMAC, as described in <a href="http://tools.ietf.org/html/rfc5849#page-17">RFC 5849 The OAuth 1.0 Protocol</a>. In this
specification a nonce is paired with a timestamp and included with each message. The timestamp avoids the need to retain an infinite number of nonce values for
future checks: the server can reject messages with timestamps older than the window of time for which nonce values are retained.</p>
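<p>A sketch of this strategy in Ruby; the five minute window and the in-memory Set are illustrative choices, and a real server would need shared storage for the retained nonces:</p>

```ruby
require 'securerandom'
require 'set'

WINDOW = 300 # seconds for which nonce values are retained

# Returns true the first time a (timestamp, nonce) pair is seen inside
# the window, false for anything stale or replayed.
def fresh?(timestamp, nonce, seen, now: Time.now.to_i)
  return false if (now - timestamp).abs > WINDOW # too old, or clock skewed
  return false if seen.include?(nonce)           # replayed message
  seen << nonce
  true
end

seen  = Set.new
nonce = SecureRandom.hex(16)
fresh?(Time.now.to_i, nonce, seen) # => true,  first use
fresh?(Time.now.to_i, nonce, seen) # => false, replay rejected
```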
<p>Based on the original post I have developed a flexible HMAC authentication library called <a href="https://github.com/wolfeidau/ofuda">Ofuda</a> for <a href="http://nodejs.org">nodejs</a>; this currently contains
a small routine to HMAC sign a list of headers for a given request. In the near future I plan to add validation of a signature, and an implementation of nonces based on the
aforementioned strategy.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Building Ruby Projects with Bundler and the Bamboo Ruby Plugin</title>
      <link>https://www.wolfe.id.au/2012/06/20/building-ruby-projects-with-bundler-and-the-bamboo-ruby-plugin/</link>
      <pubDate>Wed, 20 Jun 2012 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2012/06/20/building-ruby-projects-with-bundler-and-the-bamboo-ruby-plugin/</guid>
      <description>&lt;p&gt;The latest release of the Ruby &lt;a href=&#34;https://marketplace.atlassian.com/plugins/au.id.wolfe.bamboo.rake-bamboo-plugin&#34;&gt;plugin&lt;/a&gt; I develop for &lt;a href=&#34;http://atlassian.com/bamboo&#34;&gt;Atlassian Bamboo&lt;/a&gt; now
includes some new configuration options for &lt;a href=&#34;http://gembundler.com/&#34;&gt;Bundler&lt;/a&gt; along with a number
of other additions and improvements. In this post I want to focus on the
new options available in the Bundler task, and illustrate how they are
used to make &lt;a href=&#34;http://www.ruby-lang.org/en/&#34;&gt;Ruby&lt;/a&gt; builds simpler.&lt;/p&gt;
&lt;p&gt;In the past with my plugin the administrator of the CI server had two
options when managing the gems associated with a build:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The latest release of the Ruby <a href="https://marketplace.atlassian.com/plugins/au.id.wolfe.bamboo.rake-bamboo-plugin">plugin</a> I develop for <a href="http://atlassian.com/bamboo">Atlassian Bamboo</a> now
includes some new configuration options for <a href="http://gembundler.com/">Bundler</a> along with a number
of other additions and improvements. In this post I want to focus on the
new options available in the Bundler task, and illustrate how they are
used to make <a href="http://www.ruby-lang.org/en/">Ruby</a> builds simpler.</p>
<p>In the past with my plugin the administrator of the CI server had two
options when managing the gems associated with a build:</p>
<ol>
<li>Install all the gems required by the project prior to performing a build</li>
<li>Permit bundler to write to the local ruby installations gem directory</li>
</ol>
<p>In my view neither of these options is ideal, so I decided I would do some
research into staging gems within the working copy of a build. The aim
here was to build a ruby project without tainting the local ruby
installation, which is one of the key objectives of CI. After some reading
I discovered that Bundler could help stage my gems, saving me
from having to do so myself.</p>
<p>To take advantage of this feature in Bundler, the user was
required to pass a couple of switches to the install command, and
then run Bundler&rsquo;s exec command for all subsequent ruby executions.</p>
<p>To illustrate this using the Bamboo Ruby plugin I will run through
configuring a build with a simple rails project.</p>
<p>Firstly we need a ruby,
which I install using <a href="rvm.beginrescueend.com/">RVM</a>; you could just use the system provided one if
you&rsquo;re on OSX.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ rvm install 1.9.3</span></span></code></pre></div>
<p>Then use this install.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ rvm use 1.9.3</span></span></code></pre></div>
<p>Now the only gem you need to install is bundler.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ gem install bundler</span></span></code></pre></div>
<p>Now within Bamboo install the Bamboo Ruby Plugin via the Universal
Plugin Manager.</p>
<p>Now go to Server Capabilities in Administration section, and click the
Detect Server Capabilities button, this should find your ruby
installation as seen below.</p>
<p>Next setup a project and a release plan.</p>
<p>Then within your build Job add the Bundler Task, configuring the path
option to vendor/bundle and ticking the binstubs option.</p>
<p>Next add a Rake task to run your database migrations.</p>
<p>Next we want to run the tests in our project, in my case I am using
RSpec2 so I use the spec rake task.</p>
<p>As I like to see my test output in the CI server, I have enabled XML
output via my RSpec configuration file and added the <a href="https://github.com/sj26/rspec_junit_formatter">rspec_junit_formatter</a>
gem to my Gemfile. This produces a JUnit XML report file named
rspec.xml, which Bamboo parses into a test results report.</p>
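<p>For reference, the wiring for this is small; a sketch, with the .rspec options following the formatter&rsquo;s documented usage (version constraints omitted):</p>

```ruby
# Gemfile -- make the formatter available during test runs
group :test do
  gem 'rspec_junit_formatter'
end

# .rspec -- emit a JUnit XML report alongside the normal output:
#   --format progress
#   --format RspecJunitFormatter
#   --out rspec.xml
```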
<p>Now enable your build and run it; all going well you should
have green, otherwise have a look at the build log and see what the
issue was.</p>
<p>So that wraps up my demonstration of the Bamboo Ruby Plugin using the
awesome bundler gem to enable simple ruby build environments.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Bundler gems and binstubs</title>
      <link>https://www.wolfe.id.au/2012/06/11/bundler-gems-and-binstubs/</link>
      <pubDate>Mon, 11 Jun 2012 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2012/06/11/bundler-gems-and-binstubs/</guid>
      <description>&lt;p&gt;I have been working on an update of my &lt;a href=&#34;http://www.atlassian.com/bamboo/&#34;&gt;Bamboo&lt;/a&gt; ruby plugin which uses
&lt;a href=&#34;http://gembundler.com/&#34;&gt;bundler&lt;/a&gt; to install all the gems for a given project within the working
copy of the project and then run rake using these gems.&lt;/p&gt;
&lt;p&gt;The aim of this post is to illustrate how this is done, and how to craft
an environment to run ruby once gems are &amp;ldquo;staged&amp;rdquo; within a working copy;
specifically, how a rails project is staged
using bundler without installing any gems in the base ruby installation.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working on an update of my <a href="http://www.atlassian.com/bamboo/">Bamboo</a> ruby plugin which uses
<a href="http://gembundler.com/">bundler</a> to install all the gems for a given project within the working
copy of the project and then run rake using these gems.</p>
<p>The aim of this post is to illustrate how this is done, and how to craft
an environment to run ruby once gems are &ldquo;staged&rdquo; within a working copy;
specifically, how a rails project is staged
using bundler without installing any gems in the base ruby installation.</p>
<p>Firstly I will create a new rails project in an environment similar to that
used when developing rails applications. I will be using an installation of
ruby with bundler, rake and rails already installed; note I am passing the
-T switch as I want to set up an alternate test framework.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rails new somenewrailsproj -T</span></span></code></pre></div>
<p>Once created I navigate into the project and setup the test framework I am
intending to use which is <a href="http://github.com/rspec/rspec-rails">Rspec-2 for rails</a>.</p>
<p>Add the following code to the end of the Gemfile.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ruby" data-lang="ruby"><span class="line"><span class="cl"><span class="n">group</span> <span class="ss">:test</span><span class="p">,</span> <span class="ss">:development</span> <span class="k">do</span>
</span></span><span class="line"><span class="cl">    <span class="n">gem</span> <span class="s2">&#34;rspec-rails&#34;</span><span class="p">,</span> <span class="s2">&#34;~&gt; 2.0&#34;</span>
</span></span><span class="line"><span class="cl"><span class="k">end</span></span></span></code></pre></div>
<p>Run bundle install.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">bundle install</span></span></code></pre></div>
<p>Run the Rspec-2 generator to plug it into the rails project.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rails generate rspec:install</span></span></code></pre></div>
<p>Scaffold a sample model.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rails generate scaffold post title body:text published:boolean</span></span></code></pre></div>
<p>Migrate this to the database.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rake db:migrate</span></span></code></pre></div>
<p>Remove the pending specs as they&rsquo;re not important.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rm ./spec/helpers/posts_helper_spec.rb ./spec/models/post_spec.rb</span></span></code></pre></div>
<p>Run Rspec and we should be all green.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">bundle <span class="nb">exec</span> rake spec</span></span></code></pre></div>
<p>Now, to completely isolate our test environment we need a clean bash shell;
to do this we run the following.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">env -i bash</span></span></code></pre></div>
<p>Next we export the PATH variable; in my case I only want my specific
ruby version, and indeed the only thing I want is this version&rsquo;s bins.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="nb">export</span> <span class="nv">PATH</span><span class="o">=</span>/Users/markw/.rbenv/versions/1.9.2-p320/bin</span></span></code></pre></div>
<p>This version of ruby has just been installed and therefore only has the base set of
gems.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ gem list
</span></span><span class="line"><span class="cl">minitest <span class="o">(</span>1.6.0<span class="o">)</span>
</span></span><span class="line"><span class="cl">rake <span class="o">(</span>0.8.7<span class="o">)</span>
</span></span><span class="line"><span class="cl">rdoc <span class="o">(</span>2.5.8<span class="o">)</span></span></span></code></pre></div>
<p>To simulate a build server the only gem we will add to this installation is bundler.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">gem install bundler</span></span></code></pre></div>
<p>Now we run gem with the vendor/bundle/ruby/1.9.1 directory added to the
GEM_PATH; this shows the gems we had previously bundled.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="nv">GEM_PATH</span><span class="o">=</span>vendor/bundle/ruby/1.9.1 gem list</span></span></code></pre></div>
<p>Now, to illustrate how this will work inside a build environment, we need to augment our path a little; this will ensure gem install can find tools like compilers if required.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="nb">export</span> <span class="nv">PATH</span><span class="o">=</span>/Users/markw/.rbenv/versions/1.9.2-p320/bin:/bin:/sbin:/usr/bin:/usr/sbin</span></span></code></pre></div>
<p>Now run bundle install to recreate vendor/bundle and bin.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">bundle install --path vendor/bundle --binstubs</span></span></code></pre></div>
<p>Now run our specs with the augmented GEM_PATH.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ <span class="nv">GEM_PATH</span><span class="o">=</span>vendor/bundle/ruby/1.9.1 bundle <span class="nb">exec</span> rake spec                             
</span></span><span class="line"><span class="cl">...
</span></span><span class="line"><span class="cl">Finished in 0.32614 seconds
</span></span><span class="line"><span class="cl"><span class="m">28</span> examples, <span class="m">0</span> failures</span></span></code></pre></div>
<p>So I have illustrated how I can stage gems for a rails application
and run its tests without installing anything in the base ruby. This
should work for any gem or project which uses bundler.</p>
<p>Some points to consider about this approach are:</p>
<ul>
<li>One should note that ruby has a notion of standard library API compatibility, which is reflected in the ruby/1.9.1 section of the path; this may vary for each release.</li>
<li>For commercial projects I would recommend using a frozen set of packaged and quality assured gems and running tests with just this set.</li>
</ul>
<p>That said for things like <a href="http://octopress.org/">Octopress</a> projects and gems / projects used
internally this is quite a flexible way to run a CI build.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Less is more especially when it comes to CSS</title>
      <link>https://www.wolfe.id.au/2012/01/09/less-is-more-especially-when-it-comes-to-css/</link>
      <pubDate>Mon, 09 Jan 2012 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2012/01/09/less-is-more-especially-when-it-comes-to-css/</guid>
      <description>&lt;p&gt;I am currently working on a new design for my site using HTML5, CSS and a sprinkling of JavaScript. Once I started building my basic design I was re-acquainted with a process that really, really annoys me: the constant tweak-refresh loop associated with developing a new site layout. So in true yak-shaving sysadmin fashion I got side-tracked looking for a solution to this problem.&lt;/p&gt;
&lt;p&gt;After considering the issue for a bit it dawned on me that this may be a good excuse to try out the &lt;a href=&#34;http://lesscss.org&#34; title=&#34;LessCSS Website&#34;&gt;LessCSS&lt;/a&gt; JavaScript library. When incorporated into a website this library enables the developer to use a CSS-like markup which significantly reduces the amount of duplication and redundancy in the style sheet. The markup is processed on the client using JavaScript and has an API to control how the styles are loaded.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am currently working on a new design for my site using HTML5, CSS and a sprinkling of JavaScript. Once I started building my basic design I was re-acquainted with a process that really, really annoys me: the constant tweak-refresh loop associated with developing a new site layout. So in true yak-shaving sysadmin fashion I got side-tracked looking for a solution to this problem.</p>
<p>After considering the issue for a bit it dawned on me that this may be a good excuse to try out the <a href="http://lesscss.org" title="LessCSS Website">LessCSS</a> JavaScript library. When incorporated into a website, this library enables the developer to use a CSS-like markup which significantly reduces the amount of duplication and redundancy in the style sheet. The markup is processed on the client using JavaScript, and there is an API to control how the styles are loaded.</p>
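<p>For example, variables and nesting remove the repetition plain CSS forces on you; a small sketch (the selector names and colours are made up):</p>

```less
@brand: #4a7abf;

.sidebar {
  border: 1px solid @brand;
  a { color: @brand; }                    // compiles to .sidebar a
  a:hover { color: darken(@brand, 10%); } // built-in colour function
}
```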
<p>Initially I planned to invoke the reload function in the less library using a timer, however after reading the docs I found this feature was already built into LessCSS. Simply add <em>#!watch</em> to the URL in the browser and LessCSS will poll the style sheet for updates.</p>
<p>On a wide screen monitor this means I can tweak the CSS and watch the changes appear, which in turn removes quite a bit of keyboard gymnastics while working on a design.</p>
<p>To take advantage of these features in your site simply add the following fragment to your html page.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-html" data-lang="html"><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">link</span> <span class="na">rel</span><span class="o">=</span><span class="s">&#34;stylesheet/less&#34;</span> <span class="na">type</span><span class="o">=</span><span class="s">&#34;text/css&#34;</span> <span class="na">href</span><span class="o">=</span><span class="s">&#34;css/styles.less&#34;</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">script</span> <span class="na">src</span><span class="o">=</span><span class="s">&#34;js/libs/less.js&#34;</span> <span class="na">type</span><span class="o">=</span><span class="s">&#34;text/javascript&#34;</span><span class="p">&gt;&lt;/</span><span class="nt">script</span><span class="p">&gt;</span></span></span></code></pre></div>
<p>Then move all your styles to <em>css/styles.less</em> within your mockup/site and reload the page. To enable auto reload of the <em>styles.less</em> file append #!watch to the URL and refresh the page.</p>
<p>One thing to note is you will need to serve the site using a web server of some sort, otherwise you will get XHR issues; to do this on OSX I use a python one-liner.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ python -m SimpleHTTPServer</span></span></code></pre></div>
<p>To illustrate this feature I have created a sample project up on github <a href="https://github.com/wolfeidau/lesscss_watch_example_site" title="Sample LessCSS Project">lesscss_watch_example_site</a>.</p>
<p>I am currently working on a site which will also live reload page fragments using <a href="http://emberjs.com/" title="Ember Website">Ember</a> and <a href="http://handlebarsjs.com/" title="Handlebars Website">Handlebars</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Tips for Bamboo Plugin Developers</title>
      <link>https://www.wolfe.id.au/2011/12/07/tips-for-bamboo-plugin-developers/</link>
      <pubDate>Wed, 07 Dec 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/12/07/tips-for-bamboo-plugin-developers/</guid>
      <description>&lt;p&gt;Having recently developed a plugin (&lt;a href=&#34;https://plugins.atlassian.com/plugin/details/770964&#34;&gt;Ruby Rake Plugin&lt;/a&gt;) for Atlassian&amp;rsquo;s &lt;a href=&#34;http://www.atlassian.com/software/bamboo/overview&#34;&gt;Bamboo&lt;/a&gt; continuous integration (CI) server I thought I would put together a list of tips for those looking to do the same. As there are some great documents provided by Atlassian on how to get started with plugin development I am not going to go into a lot of detail in this area, it is assumed you already messed around a bit with the &lt;a href=&#34;https://developer.atlassian.com/display/DOCS/Developer&amp;#43;Quick&amp;#43;Start&#34;&gt;Plugin SDK&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Having recently developed a plugin (<a href="https://plugins.atlassian.com/plugin/details/770964">Ruby Rake Plugin</a>) for Atlassian&rsquo;s <a href="http://www.atlassian.com/software/bamboo/overview">Bamboo</a> continuous integration (CI) server I thought I would put together a list of tips for those looking to do the same. As there are some great documents provided by Atlassian on how to get started with plugin development I am not going to go into a lot of detail in this area, it is assumed you already messed around a bit with the <a href="https://developer.atlassian.com/display/DOCS/Developer&#43;Quick&#43;Start">Plugin SDK</a>.</p>
<p>So to start with there are some things you should learn before beginning:</p>
<h1 id="maven">Maven</h1>
<p>The entire plugin development kit revolves around it so you need to understand it, have a read over the <a href="http://www.sonatype.com/books/mvnref-book/reference/">Maven Reference</a> and add it to your bookmarks.</p>
<p>The first thing I do when coming back to maven is practice the release process; for most developers this is one of the most frustrating and complicated areas of maven, so it is worth the practice.</p>
<p>Generate a test Java project using the basic archetype, push it up to your version control site of choice, either <a href="http://bitbucket.org/">bitbucket</a> or <a href="http://github.com">github</a> is fine, and work through the development cycle. Make a few changes, check them in, and then perform a release; it normally takes me a few goes to get all the settings right in a maven project.</p>
<p>I recommend you use this approach to re-familiarise yourself with the release process after any long breaks as well; this will ensure maven hasn&rsquo;t changed since you last did it, and that you don&rsquo;t make a mess of your plugin project.</p>
<p>Once you have created your plugin project ensure you fill out all the relevant meta information in your maven pom file, as seen in the sample below. In addition to it being a good practice to do so this information can be used by maven plugins you may include in your project in the future.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;description&gt;</span>This is the ruby rake plugin for Atlassian Bamboo.<span class="nt">&lt;/description&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;organization&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;name&gt;</span>Mark Wolfe<span class="nt">&lt;/name&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;url&gt;</span>http://www.wolfe.id.au/<span class="nt">&lt;/url&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/organization&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;developers&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;developer&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;name&gt;</span>Mark Wolfe<span class="nt">&lt;/name&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;email&gt;</span>mark@wolfe.id.au<span class="nt">&lt;/email&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/developer&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/developers&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;licenses&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;license&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;name&gt;</span>Apache 2<span class="nt">&lt;/name&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;url&gt;</span>http://www.apache.org/licenses/LICENSE-2.0.txt<span class="nt">&lt;/url&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;distribution&gt;</span>repo<span class="nt">&lt;/distribution&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;comments&gt;</span>A business-friendly OSS license<span class="nt">&lt;/comments&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/license&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/licenses&gt;</span>
</span></span></code></pre></div><h1 id="the-java-ecosystem">The Java Ecosystem</h1>
<p>A small part of this ecosystem is downloaded to your system when you run maven to build your plugin, so I recommend you do a little bit of reading on some of these libraries; the ones I have listed below are a few of my favourites. I myself am a big proponent of the old saying &ldquo;When in Rome, do as the Romans do&rdquo;, so I will always try to use the libraries which are already in the SDK.</p>
<ul>
<li><a href="http://code.google.com/p/guava-libraries/">Google Collections</a>, in my view one of the core libraries which a Java developer should know.</li>
<li><a href="http://www.slf4j.org/">SLF4J</a>, one of the many logging abstractions which are used in Java projects but the one I tend to prefer.</li>
<li><a href="http://commons.apache.org/lang/">Apache Commons Lang</a>, this library has quite a few utility classes for manipulating strings as well as builders for toString and equals methods in classes.</li>
<li><a href="http://www.springframework.org">Spring Framework</a>, most of the Atlassian products are built using this dependency injection framework so it is handy to understand a bit of how this works.</li>
<li><a href="http://junit.org">JUnit</a>, this unit testing framework has been around for a long time for good reason; learn how to use it.</li>
<li><a href="http://code.google.com/p/mockito/">Mockito</a>, because mocking is a BIG must when building something inside a large application, so learn this API and ensure it is included in your plugin project from the start.</li>
</ul>
<h1 id="development-process">Development process</h1>
<p>This is one of the areas which is often left up to developers themselves to manage, so I typically follow a simple process, especially when I am working on open source projects.</p>
<ol>
<li>Before you start, write down what you want to achieve; keep things simple and don&rsquo;t plan world domination at this stage.</li>
<li>Build a first release focusing on the goals more than the method; the aim is to prove the concept and, most importantly, <em>ship it</em>.</li>
<li>Do some research now that you know what you are looking for; read other people&rsquo;s code and hack on your initial release a bit.</li>
<li>Delete your code and start the whole thing again. This sounds nuts, but your proof-of-concept code is probably best left behind (see <a href="http://coderetreat.com/">Corey Haines&rsquo; Code Retreat</a>).</li>
<li>Build a new release from scratch with more of a focus on structure, testing and extensibility, and again <em>ship it</em>.</li>
</ol>
]]></content:encoded>
    </item>
    
    <item>
      <title>Monitoring the OpenJDK from the CLI</title>
      <link>https://www.wolfe.id.au/2011/10/16/monitoring-the-openjdk-from-the-cli/</link>
      <pubDate>Sun, 16 Oct 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/10/16/monitoring-the-openjdk-from-the-cli/</guid>
      <description>&lt;p&gt;Currently I do quite a bit of work in and around the &lt;a href=&#34;http://openjdk.java.net/&#34;&gt;Java virtual machine&lt;/a&gt; (JVM), most of the time on Linux. When things go awry and I am trying to establish why, I reach for the Java performance analysis tools. These tools come in two forms, the wonderful GUI known as &lt;a href=&#34;http://visualvm.java.net/&#34;&gt;visualvm&lt;/a&gt;, which I use when I am working on my local machine, and the cli tools packaged with the Java Development Kit (JDK), which I use when working remotely.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Currently I do quite a bit of work in and around the <a href="http://openjdk.java.net/">Java virtual machine</a> (JVM), most of the time on Linux. When things go awry and I am trying to establish why, I reach for the Java performance analysis tools. These tools come in two forms, the wonderful GUI known as <a href="http://visualvm.java.net/">visualvm</a>, which I use when I am working on my local machine, and the cli tools packaged with the Java Development Kit (JDK), which I use when working remotely.</p>
<p>The CLI tools I am referring to are:</p>
<ul>
<li><a href="http://download.oracle.com/javase/6/docs/technotes/tools/share/jps.html">jps - JVM Process Status Tool</a></li>
<li><a href="http://download.oracle.com/javase/6/docs/technotes/tools/share/jstat.html">jstat - JVM Statistics Monitoring Tool</a></li>
<li><a href="http://download.oracle.com/javase/6/docs/technotes/tools/share/jhat.html">jhat - Java Heap Analysis Tool</a></li>
<li><a href="http://download.oracle.com/javase/6/docs/technotes/tools/share/jstack.html">jstack - Java Stack Trace Tool</a></li>
</ul>
<p>The tools I use most commonly are jps, jstat and jstack; the jhat tool is also very handy, but it really needs an entire blog post to itself as it is crazy what you can do with it. In this post I have put together some tips, observations and sample outputs to illustrate how I use them.</p>
<p>As I am using <a href="http://www.ubuntu.com/">ubuntu</a> 11.10, which only installs the Java runtime environment (JRE), I will need to install the JDK. In my case I decided to give openjdk 7 a shot, but version 6 would work just fine.</p>
<pre tabindex="0"><code>root@oneric:~# apt-get install openjdk-7-jdk
</code></pre><p>To try out these commands I have installed tomcat7; this can be done through apt on ubuntu, and again the previous version, tomcat 6, would be fine.</p>
<pre tabindex="0"><code>root@oneric:~# apt-get install tomcat7
</code></pre><p>Now that I have tomcat installed I want to list the Java processes; note that it is best to assume the same user account as the service when doing this. On ubuntu I would su to the user account. As the tomcat7 user is a system account I have to override the shell, since it is <em>/bin/nologin</em> by default; I can then run jps as this user.</p>
<p>The <em>jps</em> command outputs the PID of the java process along with the main class name and the argument(s) passed to it on startup.</p>
<pre tabindex="0"><code>root@oneric:~# su - tomcat7 -s /bin/bash
tomcat7@oneric:~$ jps -ml
12728 org.apache.catalina.startup.Bootstrap start
13926 sun.tools.jps.Jps -ml
tomcat7@oneric:~$
</code></pre><p>Now that we have the PID of these processes we can run jstat. The first switch I use is <em>-gcutil</em>, which gives me an overview of heap use within the jvm. In cases where there are pauses or performance degradation I will look at the last two columns. These contain the garbage collection time (GCT) and full garbage collection time (FGCT). If the FGCT column is increasing every second then it is likely we have an issue.</p>
<p>In the following example I am running <em>jstat</em> against the PID of tomcat. I have also instructed the command to display the table headers every 20 rows and to print the statistics continuously with an interval of 1000 milliseconds; as usual, Control-C will end the output.</p>
<p>This sample shows a newly started tomcat 7 with very little happening; this is clear from the values in the full garbage collection time (FGCT) and garbage collection time (GCT) columns.</p>
<p>Also of note is the permgen space (P), which is currently sitting at around 70%. The permgen space is an important area of the heap as it holds user classes, method names and internal jvm objects. If you have used tomcat for a while you will have seen the <em>java.lang.OutOfMemoryError: PermGen space</em> error, which indicates that this space has filled up and cannot be garbage collected. This frequently happens when redeploying large web applications.</p>
<p>Also in the sample we can see that the Survivor 0 (S0), Survivor 1 (S1), Eden and Old spaces have quite a bit of free space, which is good.</p>
<pre tabindex="0"><code>tomcat7@oneric:~$ jstat -gcutil -h20 12728 1000
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT
  0.00  17.90  32.12   4.81  71.41      5    0.009     1    0.023    0.032
  0.00  17.90  32.12   4.81  71.41      5    0.009     1    0.023    0.032
  0.00  17.90  32.12   4.81  71.41      5    0.009     1    0.023    0.032
</code></pre><p>To illustrate, by comparison, what a tomcat under load looks like, we can install a tool called Apache Bench.</p>
<pre tabindex="0"><code>root@oneric:~# apt-get install apache2-utils
</code></pre><p>And run the following command to hit the base page with a large number of requests concurrently.</p>
<pre tabindex="0"><code>markw@oneric:~$ ab -n 1000000 -c 100 http://localhost:8080/
</code></pre><p>Below is the output after this test was run for a bit. As we can see there has been considerable growth in the survivor 1, eden and old spaces; however the server hasn&rsquo;t spent a lot of time doing full garbage collections, as indicated by the full garbage collection count (FGC), which is only 10. Most of the work is in the young generation, as seen by the increase in the young generation collection count (YGC).</p>
<p>Also of note here is that there wasn&rsquo;t a lot of change in the permgen space; it actually went down, due to an increase in the size of the heap.</p>
<pre tabindex="0"><code>tomcat7@oneric:~$ jstat -gcutil -h20 12728 1000
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT
  0.00 100.00  52.02  81.84  59.62    117    1.176    10    0.074    1.250
  0.00 100.00  52.02  81.84  59.62    117    1.176    10    0.074    1.250
  0.00 100.00  52.02  81.84  59.62    117    1.176    10    0.074    1.250
  0.00 100.00  52.02  81.84  59.62    117    1.176    10    0.074    1.250
</code></pre><p>To look deeper into the cause of garbage collection we use the <em>jstat</em> command with the <em>-gccause</em> option. This displays the same columns as the previous command, but with two extras which supply the reasons for GC.</p>
<p>In the following output we can see an allocation failure, which indicates that a full gc is being performed because the heap is too small.</p>
<pre tabindex="0"><code>tomcat7@oneric:~$ jstat -gccause -h20 12728 1000
100.00   0.00   0.00  78.91  59.67    168    1.680    14    0.083    1.763 unknown GCCause      No GC
100.00   0.00  72.61  83.73  59.67    170    1.698    14    0.083    1.781 unknown GCCause      No GC
  0.00 100.00  46.24  91.83  59.67    173    1.729    14    0.083    1.811 unknown GCCause      No GC
100.00   0.00  11.39  29.80  59.67    176    1.759    16    0.086    1.846 unknown GCCause      No GC
100.00   0.00  92.41  35.30  59.67    179    1.777    16    0.086    1.864 unknown GCCause      Allocation Failure
  0.00 100.00  62.58  43.05  59.67    181    1.803    16    0.086    1.889 unknown GCCause      No GC
</code></pre><p>Another area I like to look into when diagnosing performance issues is the threads running in the vm. This can help me understand if any component is overloaded and therefore operating a lot of threads trying to catch up. This is mostly only applicable to async processes like messaging or scheduling routines.</p>
<p>To dump a list of threads and their current stacks, use the <em>jstack</em> command as illustrated by the sample below; again, I normally run this as the owner of the process.</p>
<pre tabindex="0"><code>tomcat7@oneric:~$ jstack 12728
2011-10-16 14:53:58
Full thread dump OpenJDK 64-Bit Server VM (20.0-b11 mixed mode):

&#34;Attach Listener&#34; daemon prio=10 tid=0x00000000015be800 nid=0x4004 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

&#34;http-bio-8080-exec-182&#34; daemon prio=10 tid=0x00007f9d84274800 nid=0x3cd3 waiting on condition [0x00007f9d7a0df000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  &lt;0x00000000ef16da38&gt; (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
        at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:104)
        at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:32)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)
...
</code></pre><p>I plan on doing a bit of work on some visualisation tools, in <a href="http://jruby.org/">jruby</a> of course; however, that can be the focus of my next post. In the process of writing this post I located some interesting articles, which are linked below:</p>
<ul>
<li><a href="http://java.sun.com/developer/technicalArticles/javase/troubleshoot/">Chuk-Munn Lee of Sun Microsystems Troubleshoots Java SE 6 Deployment</a></li>
<li><a href="http://www.freshblurbs.com/explaining-java-lang-outofmemoryerror-permgen-space">Explaining java.lang.OutOfMemoryError: PermGen space</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>How I use chef-solo with ubuntu 12.04</title>
      <link>https://www.wolfe.id.au/2011/09/10/how-i-use-chef-solo-with-ubuntu-12.04/</link>
      <pubDate>Sat, 10 Sep 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/09/10/how-i-use-chef-solo-with-ubuntu-12.04/</guid>
      <description>&lt;p&gt;Having recently started working with &lt;a href=&#34;http://www.opscode.com/chef/&#34;&gt;chef&lt;/a&gt; I have come up with a simple method of kickstarting use of it in the small
end of town. This really is designed for those of us managing a large number of very similar servers, or a small number
of simple servers.&lt;/p&gt;
&lt;p&gt;After watching this &lt;a href=&#34;http://www.youtube.com/watch?v=he7vxhm6v64&amp;amp;feature=youtu.be&#34;&gt;great presentation&lt;/a&gt; on chef by &lt;a href=&#34;https://twitter.com/benr&#34;&gt;@benr&lt;/a&gt; from &lt;a href=&#34;http://joyent.com/&#34;&gt;joyent&lt;/a&gt;, I decided it was time to roll up my sleeves and
get started with chef. To do this I took some of his advice and my meager Unix knowledge and crafted a simple bootstrap method
for my development &lt;a href=&#34;http://ubuntu.com&#34;&gt;Ubuntu&lt;/a&gt; systems, which I will describe in this post.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Having recently started working with <a href="http://www.opscode.com/chef/">chef</a> I have come up with a simple method of kickstarting use of it in the small
end of town. This really is designed for those of us managing a large number of very similar servers, or a small number
of simple servers.</p>
<p>After watching this <a href="http://www.youtube.com/watch?v=he7vxhm6v64&amp;feature=youtu.be">great presentation</a> on chef by <a href="https://twitter.com/benr">@benr</a> from <a href="http://joyent.com/">joyent</a>, I decided it was time to roll up my sleeves and
get started with chef. To do this I took some of his advice and my meager Unix knowledge and crafted a simple bootstrap method
for my development <a href="http://ubuntu.com">Ubuntu</a> systems, which I will describe in this post.</p>
<p>Firstly I get myself an ubuntu development system, which I use to develop the profile I will use for this type of system
in the future. If this is done locally I grab the ubuntu install CD for the server version and follow the default
installation options, with the only service I install being openssh. If you&rsquo;re using a VPS this is typically what you get out
of the box.</p>
<p>Because I am doing this on a local server I normally use <code>ssh-copy-id</code> to copy over my ssh public key the easy mode way.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh-copy-id admin@ubuntuserver</span></span></code></pre></div>
<p>Next I run my <a href="https://gist.github.com/3328844">bootstrap</a> script to install my environment on the server. Note that the link below
retrieves a specific revision based on the raw link in the gist; this may change based on my updates.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s1">&#39;curl https://raw.github.com/gist/3328844/2f4d74d49f8f7a2cd0b7a83a23fafe75d21241cf/gistfile1.sh | sudo bash&#39;</span></span></span></code></pre></div>
<p>In the next few steps I export my <code>chef-solo</code> template project to the system and build up a recipe for producing this
type of system. You may do this differently; I have a script with all these commands in it, but I have exploded it
for this example.</p>
<p>I start my dev cycle by generating a new ssh key pair and uploading that to <a href="https://github.com">github</a> or <a href="https://bitbucket.org/">bitbucket</a>, depending on which one you use.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s2">&#34;ssh-keygen -t rsa -b 4096 &amp;&amp; cat ~/.ssh/id_rsa.pub&#34;</span></span></span></code></pre></div>
<p>Make the chef directory and chown it for my admin user.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s2">&#34;sudo su - -c &#39;(mkdir /var/chef &amp;&amp; chown admin:admin /var/chef)&#39;&#34;</span></span></span></code></pre></div>
<p>Clone my <code>git</code> project whilst retaining ownership of the files by my admin user, for those new to git see the awesome <a href="http://git-scm.com/book">git book</a>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s2">&#34;git clone git@github.com:wolfeidau/chef-solo-base.git /var/chef&#34;</span></span></span></code></pre></div>
<p>Initialise the sub-modules and update them; this is something I always forget unless I have a script to follow.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s2">&#34;cd /var/chef &amp;&amp; git submodule init &amp;&amp; git submodule update&#34;</span></span></span></code></pre></div>
<p>Now you can run <code>chef-solo</code> just to ensure it is all running as expected.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s1">&#39;sudo chef-solo -c /var/chef/solo.rb -j /var/chef/node.json&#39;</span></span></span></code></pre></div>
<p>This template is comprised of:</p>
<ul>
<li>solo.rb - Glue code which loads various directories.</li>
<li>node.json - This is the dna for your system and will be available to your recipes as metadata.</li>
<li>cookbooks - This directory holds the cookbooks.
<ul>
<li>main - This cookbook is where my default recipe is located, along with all its associated templates.
<ul>
<li>templates - This holds all the templates I use to craft new or replacement configuration files.</li>
</ul>
</li>
<li>openssl - This is my first external cookbook; once I work out what I am doing I tend to externalise a few functions using other people&rsquo;s cookbooks.</li>
</ul>
</li>
</ul>
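<p>To make the template more concrete, here is roughly what the two top-level files contain. This is a hand-written sketch rather than the exact contents of my repository, so treat the paths and the run list as placeholders. The <code>solo.rb</code> glue code uses the standard chef-solo configuration settings:</p>
<pre tabindex="0"><code># solo.rb - tells chef-solo where everything lives
file_cache_path &#34;/var/chef/cache&#34;
cookbook_path   [&#34;/var/chef/cookbooks&#34;]
json_attribs    &#34;/var/chef/node.json&#34;
</code></pre>
<p>And <code>node.json</code> simply names the recipes to run, plus any attributes your templates need:</p>
<pre tabindex="0"><code>{
  &#34;run_list&#34;: [ &#34;recipe[openssl]&#34;, &#34;recipe[main]&#34; ]
}
</code></pre>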
<p>Using the handy <code>tree</code> command we can see the overall structure of the template.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">markw@chefdev1204:/var/chef$ tree
</span></span><span class="line"><span class="cl">.
</span></span><span class="line"><span class="cl">├── cookbooks
</span></span><span class="line"><span class="cl">│   ├── main
</span></span><span class="line"><span class="cl">│   │   ├── recipes
</span></span><span class="line"><span class="cl">│   │   │   └── default.rb
</span></span><span class="line"><span class="cl">│   │   └── templates
</span></span><span class="line"><span class="cl">│   │       └── default
</span></span><span class="line"><span class="cl">│   │           ├── bambooxml.erb
</span></span><span class="line"><span class="cl">│   │           ├── pg_hba_conf.erb
</span></span><span class="line"><span class="cl">│   │           ├── screenrc.erb
</span></span><span class="line"><span class="cl">│   │           ├── serverxml.erb
</span></span><span class="line"><span class="cl">│   │           ├── tomcat7.erb
</span></span><span class="line"><span class="cl">│   │           └── zshrc.erb
</span></span><span class="line"><span class="cl">│   └── openssl
</span></span><span class="line"><span class="cl">│       ├── CHANGELOG.md
</span></span><span class="line"><span class="cl">│       ├── CONTRIBUTING
</span></span><span class="line"><span class="cl">│       ├── libraries
</span></span><span class="line"><span class="cl">│       │   └── secure_password.rb
</span></span><span class="line"><span class="cl">│       ├── LICENSE
</span></span><span class="line"><span class="cl">│       ├── metadata.rb
</span></span><span class="line"><span class="cl">│       ├── README.md
</span></span><span class="line"><span class="cl">│       └── recipes
</span></span><span class="line"><span class="cl">│           └── default.rb
</span></span><span class="line"><span class="cl">├── node.json
</span></span><span class="line"><span class="cl">├── README.md
</span></span><span class="line"><span class="cl">└── solo.rb
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="m">8</span> directories, <span class="m">17</span> files</span></span></code></pre></div>
<p>In my example I have commented out a whole section of example code which bootstraps my CI server using chef; this
gives me some starting points. Note that I am not an authority on either chef or ruby, so my scrappy sysadmin code may
make some people grimace, but it is a starting point.</p>
<p>Once you&rsquo;re ready to start hacking, remove the remote repo and add your own.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">git remote rm origin</span></span></code></pre></div>
<p>Next some rules I try and live by when using chef:</p>
<ol>
<li>Keep it simple; if a recipe looks complicated or you don&rsquo;t understand it, don&rsquo;t use it.</li>
<li>Don&rsquo;t use chef as an alternate package manager; make packages using fpm and install them.</li>
<li>Please reread #1.</li>
</ol>
<p>Most importantly, get started with this tool; you will never look back once you have a few systems built using it.</p>
<p>To add more recipes from the <a href="https://github.com/opscode-cookbooks">opscode-cookbooks</a> account, simply navigate to the base of your project and run something like the
following example.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">git submodule add https://github.com/opscode-cookbooks/openssl.git cookbooks/openssl</span></span></code></pre></div>
<p>Once I have completed my chef project I check it out on my workstation and use <code>rsync</code> to push it to a clean target host for testing.</p>
<p>Again I run <code>ssh-copy-id</code> to copy my ssh key.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh-copy-id admin@ubuntuserver</span></span></code></pre></div>
<p>Bootstrap chef onto the server.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s1">&#39;curl https://raw.github.com/gist/3328844/2f4d74d49f8f7a2cd0b7a83a23fafe75d21241cf/gistfile1.sh | sudo bash&#39;</span></span></span></code></pre></div>
<p>Create my chef directory.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">ssh admin@ubuntuserver -t -C <span class="s2">&#34;sudo su - -c &#39;(mkdir /var/chef &amp;&amp; chown admin:admin /var/chef)&#39;&#34;</span></span></span></code></pre></div>
<p>Using rsync and a locally checked-out repo, copy the files to the remote machine.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rsync -axvr -e ssh my-chef-project/ admin@ubuntuserver:/var/chef/</span></span></code></pre></div>
]]></content:encoded>
    </item>
    
    <item>
      <title>JIRA and the not so great new installer</title>
      <link>https://www.wolfe.id.au/2011/09/04/jira-and-the-not-so-great-new-installer/</link>
      <pubDate>Sun, 04 Sep 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/09/04/jira-and-the-not-so-great-new-installer/</guid>
      <description>&lt;p&gt;The following post is a review of the new installer added in 4.4 of Atlassian JIRA. It details my experience with this installer and provides some advice on how to improve it.&lt;/p&gt;
&lt;p&gt;Since its recent release I have seen quite a few posts praising the new JIRA installer, so I decided to give it a try at work. The first thing that struck me when I went to download it was that the Linux version was a single file, no deb or RPM(s).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The following post is a review of the new installer added in 4.4 of Atlassian JIRA. It details my experience with this installer and provides some advice on how to improve it.</p>
<p>Since its recent release I have seen quite a few posts praising the new JIRA installer, so I decided to give it a try at work. The first thing that struck me when I went to download it was that the Linux version was a single file, no deb or RPM(s).</p>
<p>I copied the file to the Linux server, logged in as the jira user I wanted to run the service under, and ran the installer. The first thing the installer did was inform me I wasn&rsquo;t running as an administrator, and as such it wouldn&rsquo;t install a startup file for JIRA. As this was a VM I took a snapshot and decided to try running it as root. When I did, it asked me a couple of questions about where I would like to put parts of the installation and then went off and did its thing. Once it completed I examined what it had done; these were the results:</p>
<ol>
<li>Created a jira1 user.</li>
<li>Installed a startup script in <em>/etc/init.d</em>.</li>
<li>Installed a version of tomcat.</li>
<li>Installed a version of Java.</li>
</ol>
<p>The first point is quite amusing, as I already had a user called jira on the system, and rather than ask me whether it should use it, the installer went and created jira1. Most installers I have run recently at least have the decency of telling me before adding a user to my system, especially when they discover their preferred user name is currently in use. I also noted that rather than set the user&rsquo;s home directory to the location of the service&rsquo;s working data, it had just used the defaults when adding the user, leaving a pretty much unused home directory in <em>/home</em>. Overall, user creation and use could be much better.</p>
<p>The second point was fine until I opened the script, when I did I was astounded to find a very brief script as follows:</p>
<pre tabindex="0"><code>#!/bin/bash

# JIRA Linux service controller script
cd &#34;/opt/atlassian/jira/bin&#34;

case &#34;$1&#34; in
    start)
        ./start-jira.sh
        ;;
    stop)
        ./stop-jira.sh
        ;;
    *)
        echo &#34;Usage: $0 {start|stop}&#34;
        exit 1
        ;;
esac
</code></pre><p>Now there are numerous issues with this init script. Firstly, changing directory within a startup script and then running a script in the current working directory is a big no-no; not only is it a potential security hole waiting to happen, it is also just plain bad form. In the case of tomcat itself there are a couple of environment variables which instruct it where its base files are and where its working data is located. Using these variables removes the need to change directory (cd) in scripts and is much safer.</p>
<p>This script is also not using any of the nice shell functions present in most Linux distributions, for instance the Linux Standard Base (LSB) functions which identify the distribution of Linux the script is running on, or the daemon functions which help run your service. Even worse is the fact that this init script calls another script which changes user context, which then calls another script which starts the JVM. The tragedy of this multilayered shell abomination is further compounded by the fact that the last script in the chain is just the default tomcat <em>catalina.sh</em> script.</p>
<p>The <em>catalina.sh</em> script is a good script, however it is quite generalised and primarily designed for use during development. All that is really required to start tomcat correctly is the small piece at the end of this script, with paths configured based on the installation, along with some tuning for the application and the user it is to run under. As previously mentioned, this is all typically available in shell functions within the init system.</p>
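<p>To sketch what I mean, below is a minimal init script of the style I would have preferred to see. The paths, user name and LSB helper usage are illustrative assumptions based on a Debian/Ubuntu system, not the actual JIRA layout; the point is that tomcat&rsquo;s own <em>CATALINA_HOME</em> and <em>CATALINA_BASE</em> variables remove any need to change directory.</p>
<pre tabindex="0"><code>#!/bin/bash
### BEGIN INIT INFO
# Provides:          jira
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Atlassian JIRA
### END INIT INFO

# Pull in the distribution&#39;s LSB helper functions.
. /lib/lsb/init-functions

JIRA_USER=jira
export CATALINA_HOME=/opt/atlassian/jira        # where the installation lives
export CATALINA_BASE=/var/atlassian/jira        # where the working data lives

case &#34;$1&#34; in
    start)
        log_daemon_msg &#34;Starting JIRA&#34;
        # -s overrides the system account&#39;s nologin shell
        su &#34;$JIRA_USER&#34; -s /bin/bash -c &#34;$CATALINA_HOME/bin/catalina.sh start&#34;
        log_end_msg $?
        ;;
    stop)
        log_daemon_msg &#34;Stopping JIRA&#34;
        su &#34;$JIRA_USER&#34; -s /bin/bash -c &#34;$CATALINA_HOME/bin/catalina.sh stop&#34;
        log_end_msg $?
        ;;
    *)
        echo &#34;Usage: $0 {start|stop}&#34;
        exit 1
        ;;
esac
</code></pre>
<p>This is still simplistic (no status or restart targets, no pid handling), but it keeps the service&rsquo;s paths explicit and leans on the helpers the distribution already ships.</p>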
<p>The third point is one of the ones I really have an issue with: the installer has put an unknown version of tomcat onto my system. As tomcat is a web server it is unfortunately targeted for exploitation, and does have the odd security issue. This would be ok if there were a method of updating said tomcat, but alas it is just dumped in the location of choice, leaving me holding the baby. Note that upon searching I found the tomcat bundled with JIRA, which is version 6.0.32, does indeed have a few <a href="http://tomcat.apache.org/security-6.html">security advisories</a>; not a good start.</p>
<p>And lastly, this installer has gone and dumped another copy of Java on my system, again with no information on what version it is, and again with no way of upgrading it if a security exploit appears. So overall I am not very impressed at all with this new installer.</p>
<p>Overall I think what astounds me most is that a company of this size is completely ignoring recognised best practices when packaging software for Linux. Considering the customers who know least about Linux are the most likely to use this installer, blissfully unaware of the traps it has dragged them into, I am very disappointed.</p>
<p>Now, as I am continually reminded by my boss, if I am going to take the time to present issues I should always accompany them with some suggested solutions. So my suggestions are as follows:</p>
<ol>
<li>Work out what Linux distributions your customers are running.</li>
<li>Review how each of these distributions packages its software; in general, be a good citizen on these operating systems.</li>
<li>Set up a couple of repositories with packages tuned for ease of deployment on the main distributions. Essentially, make it as easy as possible for a novice user to deploy and <em>update</em> your products using these distribution points.</li>
<li>Write a nice init script following the best practices promoted by each distribution.</li>
<li>Break your system into its distinct components, in a similar way to how you modularise your software; pull the system down to a few discrete packages, ideally with upstream version numbers.</li>
<li>Offer updates to these packages to ensure your customers&rsquo; data is secure.</li>
</ol>
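<p>To make the repository suggestion concrete: on an apt-based distribution, the end state should be that a novice can install and patch everything with the system tools. The repository URL and package name below are hypothetical, purely to illustrate the shape of it:</p>

```shell
# Hypothetical Atlassian apt repository; the URL and package name are
# illustrative only, not a real distribution point.
repo_line="deb https://packages.example.com/atlassian stable main"
echo "$repo_line"  # would be written to /etc/apt/sources.list.d/atlassian.list

# From then on, installs and (crucially) security updates for every
# component (JIRA, Tomcat, the JVM) flow through the normal tooling:
#   sudo apt-get update && sudo apt-get install jira
#   sudo apt-get upgrade
```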
<p>One of the things I respect about Microsoft and Apple is that they have a <em>keen</em> focus on ensuring people who deploy their products look like wizards: a few clicks and everything is up and running. In my opinion the user experience for the person installing and maintaining this software is as important as the end user&rsquo;s. In a lot of cases this person has the potential to be one of Atlassian&rsquo;s greatest assets, so please don&rsquo;t mess them around.</p>
<p>As Atlassian has had some quite high profile compromises, for example the <a href="https://blogs.apache.org/infra/entry/apache_org_04_09_2010">Apache Foundation&rsquo;s JIRA</a> incident, I would have assumed that providing a secure method to install and operate their software would be at the top of their priority list.</p>
<p>Lastly, I am always happy to sit down and have a chat with anyone from Atlassian, preferably over a cold beer. I have been using their products for 8 or 9 years; yes, I even had a support request responded to by one of the founders. Atlassian&rsquo;s products have served me and my customers well, so this is the least I could do.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Road Testing Gradle</title>
      <link>https://www.wolfe.id.au/2011/08/17/road-testing-gradle/</link>
      <pubDate>Wed, 17 Aug 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/08/17/road-testing-gradle/</guid>
      <description>&lt;p&gt;For a while I have been tempted to stray from the relative safety of &lt;a href=&#34;http://maven.apache.org&#34;&gt;Maven&lt;/a&gt; and &lt;a href=&#34;http://ant.apache.org&#34;&gt;Ant&lt;/a&gt;. Every time I fight with these tools, or in the case of Maven am stunned and amazed at how simple things can require so much XML, I look over the alternatives wondering if it could be easier.&lt;/p&gt;
&lt;p&gt;After quite a bitter fight with Ant at work I decided it was time to give one of these new contenders a try. As I had recently tinkered on a project which used &lt;a href=&#34;http://gradle.org/&#34;&gt;Gradle&lt;/a&gt; and found it quite easy to use, it seemed like as good a candidate as any.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For a while I have been tempted to stray from the relative safety of <a href="http://maven.apache.org">Maven</a> and <a href="http://ant.apache.org">Ant</a>. Every time I fight with these tools, or in the case of Maven am stunned and amazed at how simple things can require so much XML, I look over the alternatives wondering if it could be easier.</p>
<p>After quite a bitter fight with Ant at work I decided it was time to give one of these new contenders a try. As I had recently tinkered on a project which used <a href="http://gradle.org/">Gradle</a> and found it quite easy to use, it seemed like as good a candidate as any.</p>
<p>My road test involved building a multi module project with a sample shared common library, a server library and a REST service. The results of my experiment are located in <a href="https://github.com/wolfeidau/agent-manager">agent-manager</a> up on GitHub. Overall I found building this project quite enjoyable, and using code, albeit Groovy which I am not that familiar with, quite refreshing.</p>
<p>After a bit of consideration I would list the pros as follows:</p>
<ul>
<li>Good overall documentation</li>
<li>Dependency management was a breeze</li>
<li>Great support for generating IDEA projects, and some great options for customising them.</li>
<li>Nice terse syntax with the ability to follow the Don&rsquo;t Repeat Yourself (DRY) mantra.</li>
<li>Very flexible build and test configuration, in my example I added a new integration test scope which was completely isolated from the existing unit tests.</li>
<li>Very little code required to get a nice project build running.</li>
</ul>
<p>The cons are as follows:</p>
<ul>
<li>Not a lot of plugins; in my case none of the major web service runtimes have a Gradle plugin. To be honest I didn&rsquo;t find a lot outside of the project&rsquo;s default ones.</li>
<li>Some of the syntax errors can be cryptic, especially when you are missing a parameter to a closure.</li>
<li>Not a lot of good examples.</li>
</ul>
<p>So overall quite an interesting exercise, in my case I will keep at it as the cons were well and truly outweighed by the pros. I would be happy to recommend Gradle to anyone who is looking for something new, especially on simple java projects.</p>
<p>If you&rsquo;re interested in learning more then have a look over my sample, and make sure you check out the following projects for inspiration.</p>
<ul>
<li><a href="https://github.com/hibernate/hibernate-core">Hibernate Core</a></li>
<li><a href="https://github.com/SpringSource/spring-integration">Spring Integration</a></li>
<li><a href="http://git.codehaus.org/gitweb.cgi?p=groovy-git.git;a=tree;h=refs/heads/trunk;hb=trunk">Groovy</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Hacking rails on Ubuntu with rvm</title>
      <link>https://www.wolfe.id.au/2011/07/24/hacking-rails-on-ubuntu-with-rvm/</link>
      <pubDate>Sun, 24 Jul 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/07/24/hacking-rails-on-ubuntu-with-rvm/</guid>
      <description>&lt;p&gt;Over the last few days I have been familiarising myself with some of the Rails source code and surveying it for use in my own projects. In doing so I noticed there were quite a few gotchas getting &lt;a href=&#34;http://www.ubuntu.com/&#34;&gt;Ubuntu&lt;/a&gt; and Ruby set up with &lt;a href=&#34;http://rvm.beginrescueend.com/&#34;&gt;RVM&lt;/a&gt; to successfully run the test suites in Rails. This post aims to provide a step by step guide to getting a clean Ubuntu installation ready to test and hack on Rails; note I am using Ubuntu 11.04 and this process has been tested on both server and desktop.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the last few days I have been familiarising myself with some of the Rails source code and surveying it for use in my own projects. In doing so I noticed there were quite a few gotchas getting <a href="http://www.ubuntu.com/">Ubuntu</a> and Ruby set up with <a href="http://rvm.beginrescueend.com/">RVM</a> to successfully run the test suites in Rails. This post aims to provide a step by step guide to getting a clean Ubuntu installation ready to test and hack on Rails; note I am using Ubuntu 11.04 and this process has been tested on both server and desktop.</p>
<p>Install the development suite for Ubuntu.</p>
<pre tabindex="0"><code>sudo apt-get install build-essential
</code></pre><p>Install the Git version control package and curl http client utility</p>
<pre tabindex="0"><code>sudo apt-get install git curl
</code></pre><p>Install the development packages which Ruby and its utilities depend on.</p>
<pre tabindex="0"><code>sudo apt-get install zlib1g-dev libssl-dev libreadline-dev
</code></pre><p>Install the packages required to build nokogiri, which is an xml library used by rails.</p>
<pre tabindex="0"><code>sudo apt-get install libxml2-dev libxslt1-dev
</code></pre><p>Install the SQLite command line tool and development package; note the command line tools aren&rsquo;t required but are very handy.</p>
<pre tabindex="0"><code>sudo apt-get install sqlite3 libsqlite3-dev
</code></pre><p>Install MySQL client, server and development packages.</p>
<pre tabindex="0"><code>sudo apt-get install mysql-client mysql-server libmysqlclient15-dev
</code></pre><p>Install the PostgreSQL client, server and development packages.</p>
<pre tabindex="0"><code>sudo apt-get install postgresql postgresql-client postgresql-contrib libpq-dev
</code></pre><p>Run the following command to install RVM</p>
<pre tabindex="0"><code>bash &lt; &lt;(curl -s https://rvm.beginrescueend.com/install/rvm)
</code></pre><p>Source the .bashrc to put the RVM command(s) in your path.</p>
<pre tabindex="0"><code>source .bashrc
</code></pre><p>Install Ruby 1.9.2; this compiles the runtime and then installs it.</p>
<pre tabindex="0"><code>rvm install 1.9.2
</code></pre><p>Enable the ruby 1.9.2 runtime as the default.</p>
<pre tabindex="0"><code>rvm use 1.9.2 --default
</code></pre><p>Update your gem command.</p>
<pre tabindex="0"><code>gem update --system
</code></pre><p>Log into the MySQL command line interface using the password which was set for the root user at installation.</p>
<pre tabindex="0"><code>mysql -uroot -p
</code></pre><p>Set up a rails user with all permissions on the activerecord test database.</p>
<pre tabindex="0"><code>GRANT ALL PRIVILEGES ON activerecord_unittest.* to &#39;rails&#39;@&#39;localhost&#39;;
GRANT ALL PRIVILEGES ON activerecord_unittest2.* to &#39;rails&#39;@&#39;localhost&#39;;
</code></pre><p>Create a user with superuser privileges in PostgreSQL for the login you&rsquo;re currently authenticated under.</p>
<pre tabindex="0"><code>sudo -u postgres createuser --superuser $USER
</code></pre><p>Navigate to where you store the code you&rsquo;re working on and clone the Rails repository.</p>
<pre tabindex="0"><code>cd ~/Code/Ruby
git clone git://github.com/rails/rails.git
cd rails
</code></pre><p>Install the latest version of bundler.</p>
<pre tabindex="0"><code>gem install bundler
</code></pre><p>Install all dependencies including the MySQL and PostgreSQL drivers.</p>
<pre tabindex="0"><code>bundle install
</code></pre><p>Run the tests.</p>
<pre tabindex="0"><code>rake test
</code></pre><p>For more information on this process see <a href="http://guides.rubyonrails.org/contributing_to_ruby_on_rails.html">Contributing to Ruby on Rails</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>GHTest Getting Started</title>
      <link>https://www.wolfe.id.au/2011/06/14/ghtest-getting-started/</link>
      <pubDate>Tue, 14 Jun 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/06/14/ghtest-getting-started/</guid>
      <description>&lt;p&gt;Recently I have caught the OS X software development bug and started hacking on a small project in this environment. As I am normally pretty keen to try out new APIs and indeed a whole new language, I was eager to get some unit testing going as early as possible. After a bit of research I discovered the preferred testing solution for this environment was &lt;a href=&#34;https://github.com/gabriel/gh-unit&#34;&gt;GHUnit&lt;/a&gt;, so I downloaded this &amp;ldquo;framework&amp;rdquo; and installed it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I have caught the OS X software development bug and started hacking on a small project in this environment. As I am normally pretty keen to try out new APIs and indeed a whole new language, I was eager to get some unit testing going as early as possible. After a bit of research I discovered the preferred testing solution for this environment was <a href="https://github.com/gabriel/gh-unit">GHUnit</a>, so I downloaded this &ldquo;framework&rdquo; and installed it.</p>
<p>One of the first things that struck me was how dependent I was on Xcode to get this up and running, and how damn confusing this was for the uninitiated. The myriad of windows you end up with in Xcode 3.2.x was sending me crazy; this, paired with the fact that I was never sure which window was my project, led me to upgrade to Xcode 4.x.</p>
<p>So without further ado, I have gathered together some screenshots on how I got my tests up and running once I had Xcode 4 installed.</p>
<p><em>Step 1:</em> Create a project, in my case I am focused on Mac OS X desktop applications so I select Application from the options.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-000.png"/> <figcaption>
            New Project Dialogue
        </figcaption>
</figure>

<p><em>Step 2:</em> Enter a project name and company, then disable unit tests as this will include OCUnit which I am not currently using. Save it where you keep your projects.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-001.png"/> <figcaption>
            Project Details Dialogue
        </figcaption>
</figure>

<p><em>Step 3:</em> Now add a new target, you should have a button down the bottom to <em>Add Target</em>, again click Mac OS X and select Application from the options and enter the details.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-005.png"/> <figcaption>
            New Target Details Dialogue
        </figcaption>
</figure>

<p><em>Step 4:</em> Delete all the files which we don&rsquo;t need.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-006.png"/> <figcaption>
            New Target Details Dialogue
        </figcaption>
</figure>

<p><em>Step 5:</em> Remove the main nib file from the <code>TestAppTests-Info.plist</code>.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-007.png"/> <figcaption>
            Editing TestAppTests plist
        </figcaption>
</figure>

<p><em>Step 6:</em> Download <a href="https://raw.github.com/gabriel/gh-unit/8cdb46819dbed8a6a7ee6566cfd7ee2524f303dd/Classes-MacOSX/GHUnitTestMain.m">GHUnitTestMain.m</a> from GitHub and then drag and drop it into the <code>TestAppTests</code> folder in Xcode. Then add it to the TestAppTests target.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-008.png"/> <figcaption>
            Select a target to add the file GHUnitTestMain.m to
        </figcaption>
</figure>

<p><em>Step 7:</em> Download the <a href="https://github.com/downloads/gabriel/gh-unit/GHUnit-0.4.28.zip">GHUnit framework</a> extract it, then copy it to your <code>/Library/Frameworks</code>. Then select the TestAppTests target and click the plus button. Search for the GHUnit.framework and add it.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-009.png"/> <figcaption>
            Add the GHUnit.framework to your project
        </figcaption>
</figure>

<p><em>Step 8:</em> Create a new test by creating a new empty file named <code>SampleTest.m</code> in the TestAppTests folder in Xcode and add the following code. Also ensure it is added to the TestAppTests target.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-objectivec" data-lang="objectivec"><span class="line"><span class="cl"><span class="c1">//
</span></span></span><span class="line"><span class="cl"><span class="c1">//  SampleTest.m
</span></span></span><span class="line"><span class="cl"><span class="c1">//  TestApp
</span></span></span><span class="line"><span class="cl"><span class="c1">//
</span></span></span><span class="line"><span class="cl"><span class="c1">//  Created by Mark Wolfe on 14/06/11.
</span></span></span><span class="line"><span class="cl"><span class="c1">//  Copyright 2011 N/A. All rights reserved.
</span></span></span><span class="line"><span class="cl"><span class="c1">//
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>
</span></span><span class="line"><span class="cl"><span class="cp">#import &lt;Foundation/Foundation.h&gt;
</span></span></span><span class="line"><span class="cl"><span class="cp">#import &lt;GHUnit/GHUnit.h&gt;
</span></span></span><span class="line"><span class="cl"><span class="cp"></span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="k">@interface</span> <span class="nc">SampleTest</span> : <span class="nc">GHTestCase</span> <span class="p">{}</span>
</span></span><span class="line"><span class="cl"><span class="k">@end</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="k">@implementation</span> <span class="nc">SampleTest</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="p">-</span> <span class="p">(</span><span class="kt">void</span><span class="p">)</span><span class="nf">testSomeStringsEqualEachOther</span><span class="p">{</span>
</span></span><span class="line"><span class="cl">    
</span></span><span class="line"><span class="cl">    <span class="n">NSString</span> <span class="o">*</span><span class="n">someString</span> <span class="o">=</span> <span class="s">@&#34;Have me some objective c testing&#34;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">    
</span></span><span class="line"><span class="cl">    <span class="n">GHAssertEqualStrings</span><span class="p">(</span><span class="s">@&#34;Have me some objective c testing&#34;</span><span class="p">,</span> <span class="n">someString</span><span class="p">,</span> <span class="s">@&#34;The strings should match bro&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="k">@end</span></span></span></code></pre></div>
<p>Your project should look as follows.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-014.png"/> <figcaption>
            Unit test in your project
        </figcaption>
</figure>

<p><em>Step 9:</em> Now set the active scheme to TestAppTests and click the play button, all going well you should see the following application pop up with your test listed in it ready to run.</p>
<figure>
    <img loading="lazy" src="/images/GHTestWalkthrough-screen-015.png"/> <figcaption>
            Unit test application displaying tests to run
        </figcaption>
</figure>

<p>Now that is all set up you should be able to add tests to your heart&rsquo;s content. So far GHUnit has enabled me to test a wide array of routines and frameworks which my application will rely upon.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>RHOK Survival Guide</title>
      <link>https://www.wolfe.id.au/2011/06/07/rhok-survival-guide/</link>
      <pubDate>Tue, 07 Jun 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/06/07/rhok-survival-guide/</guid>
      <description>&lt;p&gt;Last weekend I was fortunate to have participated in the first ever &lt;a href=&#34;http://www.rhok.org/&#34;&gt;random hacks of kindness&lt;/a&gt; event. This was quite an amazing experience and one I really enjoyed; however, going into this event I had no idea what to expect, and no real insight into what to bring. So for the benefit of those looking to attend these awesome events in the future: firstly, read &lt;a href=&#34;http://www.rhok.org/about&#34;&gt;About RHOK&lt;/a&gt;; secondly, I have compiled a list of my observations and tips.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Last weekend I was fortunate to have participated in the first ever <a href="http://www.rhok.org/">random hacks of kindness</a> event. This was quite an amazing experience and one I really enjoyed; however, going into this event I had no idea what to expect, and no real insight into what to bring. So for the benefit of those looking to attend these awesome events in the future: firstly, read <a href="http://www.rhok.org/about">About RHOK</a>; secondly, I have compiled a list of my observations and tips.</p>
<p>Prior to attending one of these events there are things you should bring:</p>
<ol>
<li>A laptop set up with a range of tools installed, depending on your skill set and development stack.</li>
<li>A pad, pen and some butcher&rsquo;s paper for big diagrams.</li>
<li>If you&rsquo;re a developer make sure you have played around with at least one online collaboration tool. I myself use <a href="http://github.com">github</a> for source control and wiki.</li>
<li>Make sure you have tested your laptop with an external display or projector; you will need to demo the stuff you build!</li>
<li>Ensure you have a copy of either PowerPoint or Keynote installed so you can build a presentation to pitch your ideas.</li>
</ol>
<p>Having listened to a summary of each of the proposed problems, along with a bit of background, we were instructed to divide ourselves into groups and begin work. This is an interesting time for each of the groups, with some people having never met before. Under the guidance of a mentor or problem owner the initial discussions begin.</p>
<p>For those new to this sort of situation I can offer the following tips:</p>
<ol>
<li>Ask a few simple questions to break the silence and help trigger some discussion around the problem.</li>
<li>Do a roll call and ask what role each member can fill, whether it be designer, front end developer, backend developer or business analyst.</li>
<li>For those in the development arena, see what stack they are comfortable developing in, whether it is Java, PHP, Rails or .NET. In this sort of situation look for ways to fit people in; a little bit of give and take can go a long way.</li>
<li>Bootstrap the project with some version control tools and a wiki before anyone gets too carried away. It is crucial all the assets created are out in the public domain for those who decide to continue hacking on the project. For me this means either setting up a GitHub repo or a <a href="http://code.google.com/hosting/">google code</a> project.</li>
<li>Lastly if you see anyone struggling to fit in go and have a chat to the other teams and see if you can find a place for them in another project.</li>
</ol>
<p>Another interesting facet of the day was that each group was at a different stage of development. One of the groups had already been established at a previous event and was looking to build on the existing solution, while others were new problems with some prepared specifications and background documentation. And lastly, the one I chose was completely new and had only had preliminary investigation done. The only thing I can recommend here is to do a bit of research around each of the problems presented for that RHOK event and choose something that interests you, then go with your gut on how to proceed. In our case we just had a good round table discussion about the problem, then listened to some great insight offered by people who had experience in the problem area.</p>
<p>So what can I offer those who, like me, are trying to get started on a completely greenfield project:</p>
<ul>
<li>Google a lot to try and find ideas</li>
<li>Talk to the event organisers and see what strings they can pull or who they can pull in to help; in my case this was VERY helpful.</li>
<li>Start brainstorming ideas as early as possible</li>
<li>Draw a few concepts on paper</li>
<li>Discuss where to get the data for these ideas and build a bit of a schema or structure.</li>
<li>Get a mock up started as soon as possible</li>
<li>Keep a note of any ideas and problems for later; this is very important as you will have to assemble some sort of presentation at the end of the event.</li>
</ul>
<p>Lastly, have fun and try new cool stuff; this will add to the pressure but also give you a chance to test yourself.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Working with the html5boilerplate project</title>
      <link>https://www.wolfe.id.au/2011/05/29/working-with-the-html5boilerplate-project/</link>
      <pubDate>Sun, 29 May 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/05/29/working-with-the-html5boilerplate-project/</guid>
      <description>&lt;p&gt;Recently I started work on a new site on which I plan to use &lt;a href=&#34;http://documentcloud.github.com/underscore/&#34;&gt;underscore.js&lt;/a&gt; and &lt;a href=&#34;http://documentcloud.github.com/backbone/&#34;&gt;backbone.js&lt;/a&gt;. Starting a site from scratch can be quite a chore, as well as time consuming. To remedy this I decided to take the plunge and give the &lt;a href=&#34;http://html5boilerplate.com/&#34;&gt;html5boilerplate&lt;/a&gt; project a try; this project is designed to kickstart your &lt;a href=&#34;http://developers.whatwg.org/&#34;&gt;html5&lt;/a&gt; site development with a shell containing all of the stuff you need to begin with.&lt;/p&gt;
&lt;p&gt;So why start with html5boilerplate at all? Well the following things come with it:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I started work on a new site on which I plan to use <a href="http://documentcloud.github.com/underscore/">underscore.js</a> and <a href="http://documentcloud.github.com/backbone/">backbone.js</a>. Starting a site from scratch can be quite a chore, as well as time consuming. To remedy this I decided to take the plunge and give the <a href="http://html5boilerplate.com/">html5boilerplate</a> project a try; this project is designed to kickstart your <a href="http://developers.whatwg.org/">html5</a> site development with a shell containing all of the stuff you need to begin with.</p>
<p>So why start with html5boilerplate at all? Well the following things come with it:</p>
<ul>
<li>Modernizr included and configured for legacy browser support.</li>
<li>A cool <code>favicon.ico</code> to eliminate those 404 errors I always get in my logs, as well as apple touch icons.</li>
<li>A base project structure with files organised into a nice layout.</li>
<li>jQuery bundled in and ready to roll.</li>
<li>A great starting point annotated css file with a whole raft of tips and tricks.</li>
<li>Sample site compression build scripts in <a href="http://ant.apache.org/">Apache Ant</a>.</li>
<li>A nice example of how to load the google analytics script asynchronously.</li>
</ul>
<p>Even for those maintaining an existing site there are quite a few very handy lessons to learn from this project. Personally I like all the server configuration file examples located at <a href="https://github.com/paulirish/html5-boilerplate-server-configs">html5-boilerplate-server-configs</a>; this is a very important facet of any site and is often overlooked.</p>
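<p>To give a flavour of what those server configs cover, here is a small Apache fragment in the same spirit (illustrative only, not copied from the project), enabling compression for common text assets:</p>

```apache
# Compress common text responses; a tiny, illustrative subset of the kind
# of rules shipped in html5-boilerplate-server-configs.
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```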
<p>So in summary I have learnt a lot of good lessons while working with this project and I will be merging some of these into my own blog site.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Using Apache CXF And Maven With a Proxy Server</title>
      <link>https://www.wolfe.id.au/2011/05/26/using-apache-cxf-and-maven-with-a-proxy-server/</link>
      <pubDate>Thu, 26 May 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/05/26/using-apache-cxf-and-maven-with-a-proxy-server/</guid>
      <description>&lt;p&gt;I discovered a couple of interesting issues when using &lt;a href=&#34;http://cxf.apache.org&#34;&gt;Apache CXF&lt;/a&gt; and &lt;a href=&#34;http://maven.apache.org&#34;&gt;Maven&lt;/a&gt; behind a proxy this week. It started when I sent out a package of stubs in a Maven project I had built to enable developers of integration systems to regenerate their own stubs from the live WSDL. This project uses the &lt;a href=&#34;http://cxf.apache.org/docs/wsdl-to-java.html&#34;&gt;wsdl2java&lt;/a&gt; tool from Apache CXF to generate some JAX-WS based SOAP stubs from the WSDL hosted on a staging server on the internet. When run on one of the developers&amp;rsquo; sites it became apparent that the Maven &lt;a href=&#34;http://cxf.apache.org/docs/maven-cxf-codegen-plugin-wsdl-to-java.html&#34;&gt;cxf-codegen-plugin&lt;/a&gt; doesn&amp;rsquo;t pass through the already configured Maven proxy settings to wsdl2java, which was a bit annoying.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I discovered a couple of interesting issues when using <a href="http://cxf.apache.org">Apache CXF</a> and <a href="http://maven.apache.org">Maven</a> behind a proxy this week. It started when I sent out a package of stubs in a Maven project I had built to enable developers of integration systems to regenerate their own stubs from the live WSDL. This project uses the <a href="http://cxf.apache.org/docs/wsdl-to-java.html">wsdl2java</a> tool from Apache CXF to generate some JAX-WS based SOAP stubs from the WSDL hosted on a staging server on the internet. When run on one of the developers&rsquo; sites it became apparent that the Maven <a href="http://cxf.apache.org/docs/maven-cxf-codegen-plugin-wsdl-to-java.html">cxf-codegen-plugin</a> doesn&rsquo;t pass through the already configured Maven proxy settings to wsdl2java, which was a bit annoying.</p>
<p>So being a happy consumer of open source software, I had a browse over the sources to Apache CXF tools and discovered the method for retrieval of the WSDL files was using <code>java.net.URL</code>. To enable a proxy server for use by this class is as simple as passing some extra switches to the <em>mvn</em> command as in the following example.</p>
<pre tabindex="0"><code>$ mvn -Dhttp.proxyHost=proxy -Dhttp.proxyPort=8080 package
</code></pre><p>Once we had overcome this issue we hit another interesting hurdle. My integration tests in this Maven project were using the Spring configuration method, and these were also failing. It turns out we needed to set the proxy in the Apache CXF configuration as well. This was done using a conduit as follows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;http-conf:conduit</span> <span class="na">name=</span><span class="s">&#34;*.http-conduit&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;http-conf:client</span> <span class="na">ProxyServer=</span><span class="s">&#34;squid.wolfe.id.au&#34;</span> <span class="na">ProxyServerPort=</span><span class="s">&#34;3128&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/http-conf:conduit&gt;</span>
</span></span></code></pre></div><p>So in summary, if you&rsquo;re working behind a proxy server building web services projects using Maven and Apache CXF you will need to do the following.</p>
<p>Configure a proxy in your Maven configuration so that assets can be retrieved, this is done as follows in your <code>settings.xml</code>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl">     <span class="nt">&lt;proxy&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;id&gt;</span>optional<span class="nt">&lt;/id&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;active&gt;</span>true<span class="nt">&lt;/active&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;protocol&gt;</span>http<span class="nt">&lt;/protocol&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;host&gt;</span>squid.wolfe.id.au<span class="nt">&lt;/host&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;port&gt;</span>3128<span class="nt">&lt;/port&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;nonProxyHosts&gt;&lt;/nonProxyHosts&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;/proxy&gt;</span>
</span></span></code></pre></div><p>Whenever you run any tests or build tasks that invoke <code>wsdl2java</code>, you will need to pass in the proxy settings as previously described.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ mvn -Dhttp.proxyHost<span class="o">=</span>proxy -Dhttp.proxyPort<span class="o">=</span><span class="m">8080</span> package
</span></span></code></pre></div><p>When running any tests or routines which use Apache CXF driven web services, you will need the conduit configured; in this example it is global and applies to all HTTP connections.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;http-conf:conduit</span> <span class="na">name=</span><span class="s">&#34;*.http-conduit&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;http-conf:client</span> <span class="na">ProxyServer=</span><span class="s">&#34;squid.wolfe.id.au&#34;</span> <span class="na">ProxyServerPort=</span><span class="s">&#34;3128&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/http-conf:conduit&gt;</span>
</span></span></code></pre></div><p>So there is no confusion about which library versions I am using, they are:</p>
<ul>
<li>Maven 2.2.1</li>
<li>Apache CXF 2.4.0</li>
<li>Sun/Oracle Java 1.6.0_24</li>
</ul>
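<p>The reason those <code>-D</code> switches work is that <code>java.net.URL</code> connections consult the standard JVM proxy system properties. A minimal sketch follows; the host, port and WSDL URL here are placeholders, not values from the original setup.</p>

```java
// Sketch: java.net.URL honours the standard JVM proxy system properties,
// which is why passing -D switches to mvn is enough for WSDL retrieval.
// Host/port/URL values below are placeholders.
public class ProxyProps {
    public static void main(String[] args) throws Exception {
        // Equivalent of: mvn -Dhttp.proxyHost=proxy -Dhttp.proxyPort=8080 package
        System.setProperty("http.proxyHost", "proxy");
        System.setProperty("http.proxyPort", "8080");

        // Any plain-HTTP connection opened after this point is routed via proxy:8080.
        java.net.URL wsdl = new java.net.URL("http://example.com/service?wsdl");
        System.out.println(wsdl.getHost() + " via "
                + System.getProperty("http.proxyHost") + ":"
                + System.getProperty("http.proxyPort"));
    }
}
```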
<p>So overall a frustrating day, but I won in the end; now all I need to do is incorporate all this information into my project so the onsite developer can work. Hopefully this post helps other people who run into the same issues.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>How to build server with Bamboo and Ubuntu</title>
      <link>https://www.wolfe.id.au/2011/05/24/how-to-build-server-with-bamboo-and-ubuntu/</link>
      <pubDate>Tue, 24 May 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/05/24/how-to-build-server-with-bamboo-and-ubuntu/</guid>
      <description>&lt;p&gt;Recently I have been looking into setting up my own build server; having used CruiseControl, Hudson and TeamCity in the past, I wanted to give something new a try. With the recent release of &lt;a href=&#34;http://www.atlassian.com/software/bamboo/&#34;&gt;Bamboo&lt;/a&gt; 3.1.1 I thought I would see what all the fuss is about. I logged onto &lt;a href=&#34;http://www.atlassian.com/&#34;&gt;Atlassian&amp;rsquo;s site&lt;/a&gt; and paid my 10 bucks, which much to my delight and respect, goes to charity.&lt;/p&gt;
&lt;p&gt;I then spun up a clean &lt;a href=&#34;http://www.ubuntu.com/&#34;&gt;Ubuntu&lt;/a&gt; server running 10.04.02, yes old trusty Long Term Support release, and ran through the following steps.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I have been looking into setting up my own build server; having used CruiseControl, Hudson and TeamCity in the past, I wanted to give something new a try. With the recent release of <a href="http://www.atlassian.com/software/bamboo/">Bamboo</a> 3.1.1 I thought I would see what all the fuss is about. I logged onto <a href="http://www.atlassian.com/">Atlassian&rsquo;s site</a> and paid my 10 bucks, which much to my delight and respect, goes to charity.</p>
<p>I then spun up a clean <a href="http://www.ubuntu.com/">Ubuntu</a> server running 10.04.02, yes old trusty Long Term Support release, and ran through the following steps.</p>
<p>First install the Sun Java Development Kit. I use this in preference to OpenJDK at the moment for QA, as it is what I normally deploy to for customers. To install this package we need to modify <code>/etc/apt/sources.list</code> and remove the comments at the start of the lines which include the partner repositories. These lines look as follows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">deb http://archive.canonical.com/ubuntu lucid partner
</span></span><span class="line"><span class="cl">deb-src http://archive.canonical.com/ubuntu lucid partner
</span></span></code></pre></div><p>Run <em>apt-get</em> to update the cache of packages.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo apt-get update
</span></span></code></pre></div><p>Then install the package for the Sun/Oracle JDK.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo apt-get install sun-java6-jdk
</span></span></code></pre></div><p>I will be using <a href="http://www.postgresql.org/">PostgreSQL</a> for my database; follow the configuration process described in the <a href="https://help.ubuntu.com/community/PostgreSQL">Ubuntu PostgreSQL Installation Howto</a>.</p>
<p>Now create a PostgreSQL user and database; note you will be prompted to enter a password for the bamboo user.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo -u postgres createuser -R -D -S -P -e bamboo
</span></span><span class="line"><span class="cl">$ sudo -u postgres createdb -O bamboo bamboo_db
</span></span></code></pre></div><p>To test that this login will work, I use the <code>psql</code> command and pass it a hostname so that it connects using TCP/IP; this ensures our JDBC driver will be able to connect the same way.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ psql -h localhost -U bamboo bamboo_db
</span></span><span class="line"><span class="cl">Password <span class="k">for</span> user bamboo:
</span></span><span class="line"><span class="cl">psql <span class="o">(</span>9.0.4<span class="o">)</span>
</span></span><span class="line"><span class="cl">SSL connection <span class="o">(</span>cipher: DHE-RSA-AES256-SHA, bits: 256<span class="o">)</span>
</span></span><span class="line"><span class="cl">Type <span class="s2">&#34;help&#34;</span> <span class="k">for</span> help.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nv">bamboo_db</span><span class="o">=</span>&gt; <span class="se">\q</span>
</span></span></code></pre></div><p>Install tomcat6 using apt. The reason I am using this package rather than the all-in-one from Atlassian is that I like my Tomcat maintained; dumping a Tomcat into your OS with no one watching your ass is unwise in my opinion.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo apt-get install tomcat6
</span></span></code></pre></div><p>Rather than just popping the war file into <code>/var/lib/tomcat6/webapps</code>, I prefer to externalise the application by extracting the archive into its own directory under <code>/usr/share</code>. I then configure Tomcat to load the application from this location. This is done so that I can guarantee a Tomcat upgrade won&rsquo;t nuke or otherwise disturb my Bamboo installation.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo mkdir /usr/share/atlassian-bamboo-3.1.1
</span></span><span class="line"><span class="cl">$ <span class="nb">cd</span> /usr/share/atlassian-bamboo-3.1.1
</span></span><span class="line"><span class="cl">$ sudo jar xvf ~/atlassian-bamboo-3.1.1.war
</span></span></code></pre></div><p>Before we start, ensure Tomcat is shut down, otherwise it might go and deploy Bamboo before we are ready!</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo /etc/init.d/tomcat6 stop
</span></span></code></pre></div><p>Next we need to configure Tomcat to load this web application; to do this, navigate to the Tomcat configuration directory.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ <span class="nb">cd</span> /var/lib/tomcat6/conf/Catalina/localhost
</span></span></code></pre></div><p>Back up the original <code>ROOT.xml</code>, being careful to preserve the permissions.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo cp -ipv ROOT.xml ROOT.bak
</span></span></code></pre></div><p>Edit the file named <code>ROOT.xml</code> as root. I will be making Bamboo the ROOT application in this Tomcat, in other words served at <code>http://myserver.com/</code>. Put the following content in this file. Note you will also need to generate a password for your DB login and enter it where the XXXXXXXX is.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;Context</span> <span class="na">path=</span><span class="s">&#34;/&#34;</span> <span class="na">docBase=</span><span class="s">&#34;/usr/share/atlassian-bamboo-3.1.1&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;Resource</span> <span class="na">name=</span><span class="s">&#34;jdbc/BambooDS&#34;</span> <span class="na">auth=</span><span class="s">&#34;Container&#34;</span> <span class="na">type=</span><span class="s">&#34;javax.sql.DataSource&#34;</span>
</span></span><span class="line"><span class="cl">            <span class="na">username=</span><span class="s">&#34;bamboo&#34;</span>
</span></span><span class="line"><span class="cl">            <span class="na">password=</span><span class="s">&#34;XXXXXXXX&#34;</span>
</span></span><span class="line"><span class="cl">            <span class="na">driverClassName=</span><span class="s">&#34;org.postgresql.Driver&#34;</span>
</span></span><span class="line"><span class="cl">            <span class="na">url=</span><span class="s">&#34;jdbc:postgresql://localhost:5432/bamboo_db&#34;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/Context&gt;</span>
</span></span></code></pre></div><p>Make the <code>ROOT.xml</code> file only readable by the tomcat user to protect the plain text password located within.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo chmod <span class="m">600</span> ROOT.xml
</span></span></code></pre></div><p>Remove the current root web application from Tomcat&rsquo;s <code>/var/lib/tomcat6/webapps</code> directory.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo rm -r /var/lib/tomcat6/webapps/ROOT
</span></span></code></pre></div><p>Create a data location for Bamboo, and change the ownership so that tomcat6 can write to this location.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo mkdir /var/lib/atlassian-bamboo
</span></span><span class="line"><span class="cl">$ sudo chown tomcat6:tomcat6 /var/lib/atlassian-bamboo
</span></span></code></pre></div><p>Now install the PostgreSQL JDBC driver in Tomcat; to do this, first download it to your home directory, then copy it to <code>/usr/share/tomcat6/lib</code> as follows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ <span class="nb">cd</span> ~/
</span></span><span class="line"><span class="cl">$ wget http://jdbc.postgresql.org/download/postgresql-9.0-801.jdbc4.jar
</span></span><span class="line"><span class="cl">$ sudo cp postgresql-9.0-801.jdbc4.jar /usr/share/tomcat6/lib
</span></span></code></pre></div><p>Modify <code>/usr/share/atlassian-bamboo-3.1.1/WEB-INF/classes/log4j.properties</code> as root to correct the location of the log file. In my opinion this needs to be a MUST-configure item during the installation process, otherwise the log file could end up anywhere, for example in the working directory of whatever script started the server.</p>
<p>Navigate to the line that looks like this.</p>
<pre tabindex="0"><code>log4j.appender.filelog.File=atlassian-bamboo.log
</code></pre><p>And change it to this.</p>
<pre tabindex="0"><code>log4j.appender.filelog.File=${catalina.base}/logs/atlassian-bamboo.log
</code></pre><p>You will also need to configure <code>bamboo.home</code> in <code>/usr/share/atlassian-bamboo-3.1.1/WEB-INF/classes/bamboo-init.properties</code>. This should be changed to the following value. Again this will need to be done as root.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">bamboo.home=/var/lib/atlassian-bamboo
</span></span></code></pre></div><p>Before we start Tomcat we need to increase the amount of memory available to it, as well as set some other parameters. This is done by editing <code>/etc/default/tomcat6</code>.</p>
<p>Open the file as root and find the following line.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="nv">JAVA_OPTS</span><span class="o">=</span><span class="s2">&#34;-Djava.awt.headless=true -Xmx128m&#34;</span>
</span></span></code></pre></div><p>Change it to the following value.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="nv">JAVA_OPTS</span><span class="o">=</span><span class="s2">&#34;-server -XX:MaxPermSize=256m -Djava.awt.headless=true -Xmx512m&#34;</span>
</span></span></code></pre></div><p>Now start tomcat and then tail the server log file.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">$ sudo /etc/init.d/tomcat6 start
</span></span><span class="line"><span class="cl">$ tail -f /var/log/tomcat6/atlassian-bamboo.log
</span></span></code></pre></div><p>Now open your browser, go to <code>http://servername:8080/</code>, and follow the prompts.</p>
<p>When prompted for a database, select the datasource option and specify the following value.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">java:comp/env/jdbc/BambooDS
</span></span></code></pre></div><p>Now I have a nice new <a href="http://atlassian.com">Atlassian</a> Bamboo server ready to build my software, more on how that goes in future posts.</p>
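<p>As an aside, the datasource value entered above is a JNDI name; inside Tomcat it resolves to the <code>Resource</code> declared in <code>ROOT.xml</code> earlier. A rough sketch of the lookup an application performs is below; run outside a container it simply fails, since no JNDI provider is present.</p>

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Rough sketch of the lookup behind the datasource name entered in the setup
// wizard. Inside Tomcat this returns the javax.sql.DataSource declared as a
// Resource in ROOT.xml; standalone there is no JNDI provider, so it fails.
public class BambooDsLookup {
    static String tryLookup(String name) {
        try {
            Object ds = new InitialContext().lookup(name);
            return "resolved: " + ds;
        } catch (NamingException e) {
            return "lookup failed: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryLookup("java:comp/env/jdbc/BambooDS"));
    }
}
```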
]]></content:encoded>
    </item>
    
    <item>
      <title>Using Jackson with Apache CXF</title>
      <link>https://www.wolfe.id.au/2011/05/22/using-jackson-with-apache-cxf/</link>
      <pubDate>Sun, 22 May 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/05/22/using-jackson-with-apache-cxf/</guid>
      <description>&lt;p&gt;Whilst working on my Javascript, &lt;a href=&#34;http://www.sencha.com/products/extjs/&#34;&gt;ExtJS 4&lt;/a&gt; and &lt;a href=&#34;http://cxf.apache.org&#34;&gt;Apache CXF&lt;/a&gt; skills I came across a solution to a problem I encountered getting ExtJS to talk to Apache CXF using JSON.&lt;/p&gt;
&lt;p&gt;The basis for this issue revolves around &amp;ldquo;wrapping&amp;rdquo; in JSON and two different schools of thought on the correct way to encode it. As far as I can see there is the more verbose version which Jettison, the default JSON serialiser in Apache CXF, produces, then there is the &amp;ldquo;unwrapped&amp;rdquo; version which the alternate serialiser &lt;a href=&#34;http://jackson.codehaus.org/&#34;&gt;Jackson&lt;/a&gt; produces.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst working on my Javascript, <a href="http://www.sencha.com/products/extjs/">ExtJS 4</a> and <a href="http://cxf.apache.org">Apache CXF</a> skills I came across a solution to a problem I encountered getting ExtJS to talk to Apache CXF using JSON.</p>
<p>The basis for this issue revolves around &ldquo;wrapping&rdquo; in JSON and two different schools of thought on the correct way to encode it. As far as I can see there is the more verbose version which Jettison, the default JSON serialiser in Apache CXF, produces, then there is the &ldquo;unwrapped&rdquo; version which the alternate serialiser <a href="http://jackson.codehaus.org/">Jackson</a> produces.</p>
<p>In my case I chose Jackson, the more terse version; this is good for a couple of reasons:</p>
<ul>
<li>It is compatible with ExtJS without any modifications</li>
<li>It is smaller and therefore produces less data on the wire.</li>
</ul>
<p>Also I like the annotations that Jackson comes with, and find it a bit easier to work with than Jettison.</p>
<p>So to enable Jackson, I modify my project&rsquo;s <a href="http://maven.apache.org">Maven</a> POM file and add the following dependency.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;dependency&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;groupId&gt;</span>org.codehaus.jackson<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;artifactId&gt;</span>jackson-jaxrs<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;version&gt;</span>1.5.7<span class="nt">&lt;/version&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/dependency&gt;</span>
</span></span></code></pre></div><p>In addition to this, some changes are required in the Spring configuration which houses our RESTful services. In the following excerpt from my Spring configuration, I have declared the <em>jsonProvider</em> and set it as one of the providers on the <em>jaxrs:server</em>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;bean</span> <span class="na">id=</span><span class="s">&#34;jsonProvider&#34;</span> <span class="na">class=</span><span class="s">&#34;org.codehaus.jackson.jaxrs.JacksonJsonProvider&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nt">&lt;jaxrs:server</span> <span class="na">id=</span><span class="s">&#34;restServices&#34;</span> <span class="na">address=</span><span class="s">&#34;/&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;jaxrs:serviceBeans&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;ref</span> <span class="na">bean=</span><span class="s">&#34;projectService&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/jaxrs:serviceBeans&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;jaxrs:providers&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;ref</span> <span class="na">bean=</span><span class="s">&#34;jsonProvider&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/jaxrs:providers&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;jaxrs:features&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;cxf:logging/&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/jaxrs:features&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/jaxrs:server&gt;</span>
</span></span></code></pre></div><p>Once Jackson was enabled my ExtJS JSON driven data stores were functioning perfectly, aside from dates. Jackson&rsquo;s default behaviour when serialising a <em>java.util.Date</em> is to convert it to milliseconds since the epoch. To get ISO 8601 dates instead, I used a feature in Spring known as compound property names; this enabled me to instantiate an instance of the mapper and override its <em>serializationConfig.dateFormat</em>. This is shown in the following excerpt, which illustrates the updated <em>jsonProvider</em> using the reconfigured <em>jacksonMapper</em>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;bean</span> <span class="na">id=</span><span class="s">&#34;jacksonMapper&#34;</span> <span class="na">class=</span><span class="s">&#34;org.codehaus.jackson.map.ObjectMapper&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;property</span> <span class="na">name=</span><span class="s">&#34;serializationConfig.dateFormat&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;bean</span> <span class="na">class=</span><span class="s">&#34;java.text.SimpleDateFormat&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;constructor-arg</span> <span class="na">value=</span><span class="s">&#34;yyyy-MM-dd&#39;T&#39;HH:mm:ss.SZ&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/bean&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/property&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/bean&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nt">&lt;bean</span> <span class="na">id=</span><span class="s">&#34;jsonProvider&#34;</span> <span class="na">class=</span><span class="s">&#34;org.codehaus.jackson.jaxrs.JacksonJsonProvider&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="na">p:mapper-ref=</span><span class="s">&#34;jacksonMapper&#34;</span><span class="nt">/&gt;</span>
</span></span></code></pre></div><p>The result of this is shown in the following JSON sample which illustrates how a project object containing some dates is encoded.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-javascript" data-lang="javascript"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nx">success</span><span class="o">:</span> <span class="kc">true</span>
</span></span><span class="line"><span class="cl">    <span class="nx">message</span><span class="o">:</span> <span class="s2">&#34;Project found.&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="nx">data</span><span class="o">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nx">artifactId</span><span class="o">:</span> <span class="s2">&#34;bobtheapp&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="nx">groupId</span><span class="o">:</span> <span class="s2">&#34;au.id.wolfe.bta&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="nx">inceptionYear</span><span class="o">:</span> <span class="s2">&#34;2011&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="nx">organization</span><span class="o">:</span> <span class="kc">null</span>
</span></span><span class="line"><span class="cl">        <span class="nx">developers</span><span class="o">:</span> <span class="kc">null</span>
</span></span><span class="line"><span class="cl">        <span class="nx">dateAdded</span><span class="o">:</span> <span class="s2">&#34;2011-05-21T20:34:15.862+1000&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="nx">dateUpdated</span><span class="o">:</span> <span class="s2">&#34;2011-05-21T20:34:15.862+1000&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="nx">version</span><span class="o">:</span> <span class="s2">&#34;1.0.0&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>So after another journey off track, it&rsquo;s back to hacking on my project.</p>
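<p>For reference, the date strings in the sample above come from the <em>SimpleDateFormat</em> pattern wired into the <em>jacksonMapper</em>. A standalone sketch of what that pattern produces is below; the timezone and instant are fixed here purely so the output is reproducible.</p>

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Standalone look at the date pattern passed to the jacksonMapper's
// SimpleDateFormat. Timezone and instant are pinned so the output is stable.
public class DatePatternDemo {
    static String formatEpoch() {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SZ");
        fmt.setTimeZone(TimeZone.getTimeZone("GMT+10:00"));
        return fmt.format(new Date(0L)); // the epoch instant rendered in GMT+10
    }

    public static void main(String[] args) {
        System.out.println(formatEpoch()); // 1970-01-01T10:00:00.0+1000
    }
}
```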
]]></content:encoded>
    </item>
    
    <item>
      <title>Apache CXF Logging Configuration</title>
      <link>https://www.wolfe.id.au/2011/05/20/apache-cxf-logging-configuration/</link>
      <pubDate>Fri, 20 May 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/05/20/apache-cxf-logging-configuration/</guid>
      <description>&lt;p&gt;Configuring logging in &lt;a href=&#34;http://cxf.apache.org&#34;&gt;Apache CXF&lt;/a&gt; can be confusing at first; in my view this is further compounded by Apache Tomcat&amp;rsquo;s logging architecture.&lt;/p&gt;
&lt;p&gt;In my CXF based projects the first step is to use the &lt;a href=&#34;http://cxf.apache.org/docs/debugging-and-logging.html#DebuggingandLogging-UsingSLF4JInsteadofjava.util.logging%28since2.2.8%29&#34;&gt;logging configuration available in 2.2.8 or later&lt;/a&gt;. This enables you to configure your logger of choice for the entire CXF stack. Needless to say I really like this feature, as I am not a big fan of java.util.logging; its configuration is not very intuitive.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Configuring logging in <a href="http://cxf.apache.org">Apache CXF</a> can be confusing at first; in my view this is further compounded by Apache Tomcat&rsquo;s logging architecture.</p>
<p>In my CXF based projects the first step is to use the <a href="http://cxf.apache.org/docs/debugging-and-logging.html#DebuggingandLogging-UsingSLF4JInsteadofjava.util.logging%28since2.2.8%29">logging configuration available in 2.2.8 or later</a>. This enables you to configure your logger of choice for the entire CXF stack. Needless to say I really like this feature, as I am not a big fan of java.util.logging; its configuration is not very intuitive.</p>
<p>First step is to change all logging in CXF to my logging stack of choice, which is <a href="http://www.slf4j.org/">slf4j</a> and <a href="http://logback.qos.ch/">logback</a>. As per the CXF documentation I added a file named <em>org.apache.cxf.Logger</em> in my Maven web application project, located at <em>src/main/resources/META-INF/cxf</em>. This file contains just the following string.</p>
<pre tabindex="0"><code>org.apache.cxf.common.logging.Slf4jLogger
</code></pre>
<p>I also add the following dependencies to my Maven project&rsquo;s <em>pom.xml</em>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="c">&lt;!-- This is to override spring&#39;s dependence on apache commons logging --&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;dependency&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;groupId&gt;</span>org.slf4j<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;artifactId&gt;</span>jcl-over-slf4j<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;version&gt;</span>1.5.11<span class="nt">&lt;/version&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/dependency&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nt">&lt;dependency&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;groupId&gt;</span>org.slf4j<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;artifactId&gt;</span>slf4j-api<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;version&gt;</span>1.5.11<span class="nt">&lt;/version&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/dependency&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nt">&lt;dependency&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;groupId&gt;</span>ch.qos.logback<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;artifactId&gt;</span>logback-core<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;version&gt;</span>0.9.19<span class="nt">&lt;/version&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/dependency&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nt">&lt;dependency&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;groupId&gt;</span>ch.qos.logback<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;artifactId&gt;</span>logback-classic<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;version&gt;</span>0.9.19<span class="nt">&lt;/version&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/dependency&gt;</span>
</span></span></code></pre></div><p>And I configure a basic logback configuration within <em>logback.xml</em>, located in my web project&rsquo;s <em>src/main/resources</em> directory.</p>
<p>The main aims of this are:</p>
<ul>
<li>Summary log file with only warnings and errors.</li>
<li>Trace file containing all web service messages.</li>
</ul>
<p>The main reason for separating WS messages into a trace file is to keep the application log file down to an easy-to-handle size. This is especially helpful for systems which handle large web service messages. Also, you may want to roll the trace file at a different interval than the application log.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;configuration&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- Just used while running in process while developing --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;appender</span> <span class="na">name=</span><span class="s">&#34;STDOUT&#34;</span>
</span></span><span class="line"><span class="cl">              <span class="na">class=</span><span class="s">&#34;ch.qos.logback.core.ConsoleAppender&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;encoder&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;pattern&gt;</span>
</span></span><span class="line"><span class="cl">                %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;/pattern&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/encoder&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/appender&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- This is just for contents of web service operations and can get quite large --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;appender</span> <span class="na">name=</span><span class="s">&#34;WSLOGFILE&#34;</span> <span class="na">class=</span><span class="s">&#34;ch.qos.logback.core.rolling.RollingFileAppender&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;file&gt;</span>${catalina.home}/logs/mpw-message-trace.log<span class="nt">&lt;/file&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;rollingPolicy</span> <span class="na">class=</span><span class="s">&#34;ch.qos.logback.core.rolling.TimeBasedRollingPolicy&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="c">&lt;!-- daily rollover --&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;fileNamePattern&gt;</span>${catalina.home}/logs/mpw-message-trace.%d{yyyy-MM-dd}.log<span class="nt">&lt;/fileNamePattern&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/rollingPolicy&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;encoder&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;pattern&gt;</span>
</span></span><span class="line"><span class="cl">                %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;/pattern&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/encoder&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/appender&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- Used for application logging to which when deployed is quite terse and restricted to warnings typically --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;appender</span> <span class="na">name=</span><span class="s">&#34;APPLOGFILE&#34;</span> <span class="na">class=</span><span class="s">&#34;ch.qos.logback.core.rolling.RollingFileAppender&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;file&gt;</span>${catalina.home}/logs/mpw.log<span class="nt">&lt;/file&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;rollingPolicy</span> <span class="na">class=</span><span class="s">&#34;ch.qos.logback.core.rolling.TimeBasedRollingPolicy&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="c">&lt;!-- daily rollover --&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;fileNamePattern&gt;</span>${catalina.home}/logs/mpw.%d{yyyy-MM-dd}.log<span class="nt">&lt;/fileNamePattern&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/rollingPolicy&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;encoder&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;pattern&gt;</span>
</span></span><span class="line"><span class="cl">                %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;/pattern&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/encoder&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/appender&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;logger</span> <span class="na">name=</span><span class="s">&#34;org.apache.cxf&#34;</span> <span class="na">level=</span><span class="s">&#34;INFO&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;logger</span> <span class="na">name=</span><span class="s">&#34;org.springframework&#34;</span> <span class="na">level=</span><span class="s">&#34;INFO&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- Set additivity to false when deployed --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;logger</span> <span class="na">name=</span><span class="s">&#34;org.apache.cxf.interceptor&#34;</span> <span class="na">additivity=</span><span class="s">&#34;true&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;appender-ref</span> <span class="na">ref=</span><span class="s">&#34;WSLOGFILE&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;/logger&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;root</span> <span class="na">level=</span><span class="s">&#34;INFO&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="c">&lt;!-- STDOUT is normally this is removed when deployed as it ends up in tomcat server logs --&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;appender-ref</span> <span class="na">ref=</span><span class="s">&#34;STDOUT&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;appender-ref</span> <span class="na">ref=</span><span class="s">&#34;APPLOGFILE&#34;</span><span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/root&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/configuration&gt;</span>
</span></span></code></pre></div><p>For a more complete sample you can look over the source of <a href="https://github.com/wolfeidau/mvn-project-wizard/">maven-project-wizard</a> on <a href="http://github.com">GitHub</a>.</p>
<p>In addition, configuration is required at the operating system level to compress and archive these log files. I will cover this in a later post.</p>
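<p>Until that post, a rough sketch of the idea: a daily cron job which compresses the rolled-over log files and prunes old compressed archives. The paths, file patterns and retention period below are assumptions for illustration only, not the configuration I actually use:</p>
<pre tabindex="0"><code># hypothetical /etc/cron.daily/archive-mpw-logs
# compress rolled logs untouched for a day, then drop compressed archives older than 30 days
find /usr/local/tomcat/logs -name &#39;mpw*.*.log&#39; -mtime +1 -exec gzip {} \;
find /usr/local/tomcat/logs -name &#39;mpw*.log.gz&#39; -mtime +30 -delete
</code></pre>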
]]></content:encoded>
    </item>
    
    <item>
      <title>Building Apache CXF web service clients namespace mapping</title>
      <link>https://www.wolfe.id.au/2011/05/07/building-apache-cxf-web-service-clients-namespace-mapping/</link>
      <pubDate>Sat, 07 May 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/05/07/building-apache-cxf-web-service-clients-namespace-mapping/</guid>
      <description>&lt;p&gt;Recently I have been doing quite a bit of work around &lt;a href=&#34;http://cxf.apache.org/&#34;&gt;Apache CXF&lt;/a&gt;, both on the server side and the client side. Over the next few weeks I will post some of the tricks I have learnt.&lt;/p&gt;
&lt;p&gt;In this post I will summarise one of the &lt;a href=&#34;http://cxf.apache.org/docs/maven-cxf-codegen-plugin-wsdl-to-java.html&#34;&gt;cxf-codegen-plugin&lt;/a&gt; options I used to assist me while consuming WCF web services using Apache CXF. When dealing with WCF-based services, and indeed any WSDL which uses more than one namespace, it is handy to know how to control namespaces and how they translate into packages in your Java stubs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I have been doing quite a bit of work around <a href="http://cxf.apache.org/">Apache CXF</a>, both on the server side and the client side. Over the next few weeks I will post some of the tricks I have learnt.</p>
<p>In this post I will summarise one of the <a href="http://cxf.apache.org/docs/maven-cxf-codegen-plugin-wsdl-to-java.html">cxf-codegen-plugin</a> options I used to assist me while consuming WCF web services using Apache CXF. When dealing with WCF-based services, and indeed any WSDL which uses more than one namespace, it is handy to know how to control namespaces and how they translate into packages in your Java stubs.</p>
<p>When consuming WCF services you may encounter either the default namespace for services, which is <em>tempuri.org</em>, or, more commonly, one or more custom namespaces configured by the developer.</p>
<p>Below is an example of using selective namespace mappings by specifying the -p option in the cxf-codegen-plugin. This switch instructs wsdl2java to map all XML objects in a given namespace into the supplied Java package.</p>
<p>So if we had a WCF based registration service which included the following namespaces:</p>
<ul>
<li><a href="http://schemas.datacontract.org/2004/07/System">http://schemas.datacontract.org/2004/07/System</a></li>
<li><a href="http://schemas.datacontract.org/2004/07/Wolfeidau.Model">http://schemas.datacontract.org/2004/07/Wolfeidau.Model</a></li>
<li><a href="http://wolfe.id.au/services/">http://wolfe.id.au/services/</a></li>
</ul>
<p>Only the <a href="http://wolfe.id.au/services/">http://wolfe.id.au/services/</a> namespace would be mapped into the <em>au.id.wolfe.services.registration</em> Java package.</p>
<p>This is typically done to avoid name clashes and issues with overlapping data objects used by more than one service in the same namespace.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="c">&lt;!-- Generate client using WSDL --&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;plugin&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;groupId&gt;</span>org.apache.cxf<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;artifactId&gt;</span>cxf-codegen-plugin<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;version&gt;</span>2.4.0<span class="nt">&lt;/version&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;executions&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;execution&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;id&gt;</span>generate-sources<span class="nt">&lt;/id&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;phase&gt;</span>generate-sources<span class="nt">&lt;/phase&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;configuration&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;sourceRoot&gt;</span>${basedir}/target/generated/src/main/java<span class="nt">&lt;/sourceRoot&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;wsdlOptions&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;wsdlOption&gt;</span>
</span></span><span class="line"><span class="cl">              <span class="nt">&lt;wsdl&gt;</span>http://wolfe.id.au/services/registration?WSDL<span class="nt">&lt;/wsdl&gt;</span>
</span></span><span class="line"><span class="cl">              <span class="nt">&lt;serviceName&gt;</span>RegistrationService<span class="nt">&lt;/serviceName&gt;</span>
</span></span><span class="line"><span class="cl">              <span class="nt">&lt;extraargs&gt;</span>
</span></span><span class="line"><span class="cl">                <span class="nt">&lt;extraarg&gt;</span>-client<span class="nt">&lt;/extraarg&gt;</span>
</span></span><span class="line"><span class="cl">                <span class="nt">&lt;extraarg&gt;</span>-verbose<span class="nt">&lt;/extraarg&gt;</span>
</span></span><span class="line"><span class="cl">                <span class="nt">&lt;extraarg&gt;</span>-p<span class="nt">&lt;/extraarg&gt;</span>
</span></span><span class="line"><span class="cl">                <span class="nt">&lt;extraarg&gt;</span>http://wolfe.id.au/services/=au.id.wolfe.services.registration<span class="nt">&lt;/extraarg&gt;</span>
</span></span><span class="line"><span class="cl">              <span class="nt">&lt;/extraargs&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;/wsdlOption&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;/wsdlOptions&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/configuration&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;goals&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;goal&gt;</span>wsdl2java<span class="nt">&lt;/goal&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/goals&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/execution&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/executions&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/plugin&gt;</span>
</span></span></code></pre></div><p>As most of the samples on the Apache CXF website are, in my opinion, way too simplistic, I am putting together some more extensive client and server samples which I will post up on <a href="http://github.com/wolfeidau">wolfeidau GitHub</a> soon.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Moving to Jekyll and Disqus</title>
      <link>https://www.wolfe.id.au/2011/03/06/moving-to-jekyll-and-disqus/</link>
      <pubDate>Sun, 06 Mar 2011 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2011/03/06/moving-to-jekyll-and-disqus/</guid>
      <description>&lt;p&gt;First post from my newly migrated blog, reviewing the migration from WordPress to Jekyll and Disqus.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://github.com/mojombo/jekyll&#34;&gt;Jekyll&lt;/a&gt; for site generation.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://sass-lang.com/&#34;&gt;Sass&lt;/a&gt; for simplified css goodness.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://pygments.org/&#34;&gt;Pygments&lt;/a&gt; code highlighting.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://disqus.com/&#34;&gt;Disqus&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Following the example of some of the sites which use Jekyll, I generated the skeleton and began building my site. After trialing some of the code highlighting options I selected Pygments, to ensure my code samples looked good.&lt;/p&gt;
&lt;p&gt;Once the base was configured and working I moved on to adding a plugin to utilise Sass in my website. Below is the source of my &lt;em&gt;_plugins/sass_converter.rb&lt;/em&gt;, which converts any SCSS (CSS-like Sass syntax) files to CSS.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>First post from my newly migrated blog, reviewing the migration from WordPress to Jekyll and Disqus.</p>
<ul>
<li><a href="http://github.com/mojombo/jekyll">Jekyll</a> for site generation.</li>
<li><a href="http://sass-lang.com/">Sass</a> for simplified css goodness.</li>
<li><a href="http://pygments.org/">Pygments</a> code highlighting.</li>
<li><a href="http://disqus.com/">Disqus</a></li>
</ul>
<p>Following the example of some of the sites which use Jekyll, I generated the skeleton and began building my site. After trialing some of the code highlighting options I selected Pygments, to ensure my code samples looked good.</p>
<p>Once the base was configured and working I moved on to adding a plugin to utilise Sass in my website. Below is the source of my <em>_plugins/sass_converter.rb</em>, which converts any SCSS (CSS-like Sass syntax) files to CSS.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ruby" data-lang="ruby"><span class="line"><span class="cl"><span class="k">module</span> <span class="nn">Jekyll</span>
</span></span><span class="line"><span class="cl">  <span class="c1"># Sass plugin to convert .scss to .css</span>
</span></span><span class="line"><span class="cl">  <span class="c1">#</span>
</span></span><span class="line"><span class="cl">  <span class="c1"># Note: This is configured to use the new css like syntax available in sass.</span>
</span></span><span class="line"><span class="cl">  <span class="nb">require</span> <span class="s1">&#39;sass&#39;</span>
</span></span><span class="line"><span class="cl">  <span class="k">class</span> <span class="nc">SassConverter</span> <span class="o">&lt;</span> <span class="no">Converter</span>
</span></span><span class="line"><span class="cl">    <span class="n">safe</span> <span class="kp">true</span>
</span></span><span class="line"><span class="cl">    <span class="n">priority</span> <span class="ss">:low</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">     <span class="k">def</span> <span class="nf">matches</span><span class="p">(</span><span class="n">ext</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">      <span class="n">ext</span> <span class="o">=~</span> <span class="sr">/scss/i</span>
</span></span><span class="line"><span class="cl">    <span class="k">end</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">def</span> <span class="nf">output_ext</span><span class="p">(</span><span class="n">ext</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">      <span class="s2">&#34;.css&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="k">end</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">def</span> <span class="nf">convert</span><span class="p">(</span><span class="n">content</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">      <span class="k">begin</span>
</span></span><span class="line"><span class="cl">        <span class="nb">puts</span> <span class="s2">&#34;Performing Sass Conversion.&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="n">engine</span> <span class="o">=</span> <span class="no">Sass</span><span class="o">::</span><span class="no">Engine</span><span class="o">.</span><span class="n">new</span><span class="p">(</span><span class="n">content</span><span class="p">,</span> <span class="ss">:syntax</span> <span class="o">=&gt;</span> <span class="ss">:scss</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">        <span class="n">engine</span><span class="o">.</span><span class="n">render</span>
</span></span><span class="line"><span class="cl">      <span class="k">rescue</span> <span class="no">StandardError</span> <span class="o">=&gt;</span> <span class="n">e</span>
</span></span><span class="line"><span class="cl">        <span class="nb">puts</span> <span class="s2">&#34;!!! SASS Error: &#34;</span> <span class="o">+</span> <span class="n">e</span><span class="o">.</span><span class="n">message</span>
</span></span><span class="line"><span class="cl">      <span class="k">end</span>
</span></span><span class="line"><span class="cl">    <span class="k">end</span>
</span></span><span class="line"><span class="cl">  <span class="k">end</span>
</span></span><span class="line"><span class="cl"><span class="k">end</span>
</span></span></code></pre></div>]]></content:encoded>
    </item>
    
    <item>
      <title>Installing Ruby with RVM on Ubuntu 10.10</title>
      <link>https://www.wolfe.id.au/2010/12/06/installing-ruby-with-rvm-on-ubuntu-10.10/</link>
      <pubDate>Mon, 06 Dec 2010 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2010/12/06/installing-ruby-with-rvm-on-ubuntu-10.10/</guid>
      <description>&lt;p&gt;I have been installing &lt;a href=&#34;http://rvm.beginrescueend.com/&#34;&gt;RVM&lt;/a&gt; on anything that runs *nix lately, and ran into some issues using this great tool on &lt;a href=&#34;http://www.ubuntu.com/&#34;&gt;Ubuntu 10.10&lt;/a&gt;. After a bit of reading I discovered a couple of solutions: either I could build and install some of these libraries using RVM, or I could locate and install the dev versions of these libraries in Ubuntu. As I like my libraries kept updated for security issues and the like, I took the latter option.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been installing <a href="http://rvm.beginrescueend.com/">RVM</a> on anything that runs *nix lately, and ran into some issues using this great tool on <a href="http://www.ubuntu.com/">Ubuntu 10.10</a>. After a bit of reading I discovered a couple of solutions: either I could build and install some of these libraries using RVM, or I could locate and install the dev versions of these libraries in Ubuntu. As I like my libraries kept updated for security issues and the like, I took the latter option.</p>
<p>So after sniffing out all the dependencies I pulled together a brief rundown on how to get Ruby, RubyGems and some other commonly used gems built and running on this version of Ubuntu.</p>
<p>Install the development suite for Ubuntu.</p>
<pre tabindex="0"><code>sudo apt-get install build-essential
</code></pre><p>Install the Git version control package and the curl HTTP client utility.</p>
<pre tabindex="0"><code>sudo apt-get install git-core
sudo apt-get install curl
</code></pre>
<p>Install the development packages which Ruby and its utilities depend on.</p>
<pre tabindex="0"><code>sudo apt-get install zlib1g-dev
sudo apt-get install libssl-dev
sudo apt-get install libreadline-dev
</code></pre><p>Install the packages required to build nokogiri, an XML library used by various gems including RSpec.</p>
<pre tabindex="0"><code>sudo apt-get install libxml2-dev
sudo apt-get install libxslt-dev
</code></pre><p>Install the SQLite command-line tool and development package.</p>
<pre tabindex="0"><code>sudo apt-get install sqlite3 libsqlite3-dev
</code></pre><p>Install the MySQL client and development package.</p>
<pre tabindex="0"><code>sudo apt-get install mysql-client libmysqlclient-dev
</code></pre><p>Optionally install MySQL Server.</p>
<pre tabindex="0"><code>sudo apt-get install mysql-server
</code></pre><p>Run the following command to install RVM</p>
<pre tabindex="0"><code>bash &lt; &lt;( curl http://rvm.beginrescueend.com/releases/rvm-install-head )
</code></pre><p>Append the following two lines to the end of your .bashrc file.</p>
<pre tabindex="0"><code># This loads RVM into a shell session.
[[ -s &#34;$HOME/.rvm/scripts/rvm&#34; ]] &amp;&amp; source &#34;$HOME/.rvm/scripts/rvm&#34;
</code></pre><p>Source the .bashrc to put the RVM command(s) in your path.</p>
<pre tabindex="0"><code>source .bashrc
</code></pre><p>Install Ruby 1.9.2; this compiles the runtime and then installs it.</p>
<pre tabindex="0"><code>rvm install 1.9.2
</code></pre><p>Enable the ruby 1.9.2 runtime as the default.</p>
<pre tabindex="0"><code>rvm use 1.9.2 --default
</code></pre><p>List your default gems</p>
<pre tabindex="0"><code>gem list
</code></pre><p>Install a few handy gems</p>
<pre tabindex="0"><code>gem install rails nokogiri sqlite3-ruby mysql rspec
</code></pre><p>This should give you the basic environment required to build a simple Rails application, and tests of course, as we all love tests!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Maven, where I start</title>
      <link>https://www.wolfe.id.au/2010/09/02/maven-where-i-start/</link>
      <pubDate>Thu, 02 Sep 2010 00:00:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2010/09/02/maven-where-i-start/</guid>
      <description>&lt;p&gt;More often than not today I start my Java projects using an &lt;a href=&#34;http://maven.apache.org&#34;&gt;Apache Maven&lt;/a&gt; archetype. Whether it be an &lt;a href=&#34;http://cxf.apache.org&#34;&gt;Apache CXF&lt;/a&gt;-based web service, a simple web application, or some sort of service, most are started from an archetype.&lt;/p&gt;
&lt;p&gt;However, to pretty much every project I need to add my secret ingredients; this typically starts with a couple of key plug-ins:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://maven.apache.org/plugins/maven-checkstyle-plugin/&#34;&gt;maven-checkstyle-plugin&lt;/a&gt; – I really like to know when my code is a mess, even if it does annoy some people.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://code.google.com/p/maven-license-plugin/&#34;&gt;maven-license-plugin&lt;/a&gt; – Again I like stuff neat so getting everything tagged under a license is pretty important.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I have extracted a sample of my default configuration to illustrate how these plug-ins are configured in the pom.xml.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>More often than not today I start my Java projects using an <a href="http://maven.apache.org">Apache Maven</a> archetype. Whether it be an <a href="http://cxf.apache.org">Apache CXF</a>-based web service, a simple web application, or some sort of service, most are started from an archetype.</p>
<p>However, to pretty much every project I need to add my secret ingredients; this typically starts with a couple of key plug-ins:</p>
<ul>
<li><a href="http://maven.apache.org/plugins/maven-checkstyle-plugin/">maven-checkstyle-plugin</a> – I really like to know when my code is a mess, even if it does annoy some people.</li>
<li><a href="http://code.google.com/p/maven-license-plugin/">maven-license-plugin</a> – Again I like stuff neat so getting everything tagged under a license is pretty important.</li>
</ul>
<p>I have extracted a sample of my default configuration to illustrate how these plug-ins are configured in the pom.xml.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;build&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;plugins&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;plugin&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;groupId&gt;</span>com.mycila.maven-license-plugin<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;artifactId&gt;</span>maven-license-plugin<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;configuration&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;properties&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;owner&gt;</span>Mark Wolfe<span class="nt">&lt;/owner&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;year&gt;</span>${project.inceptionYear}<span class="nt">&lt;/year&gt;</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&lt;email&gt;</span>mark.wolfe@wolfe.id.au<span class="nt">&lt;/email&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;/properties&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;header&gt;</span>${basedir}/src/main/etc/header.txt<span class="nt">&lt;/header&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/configuration&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;/plugin&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/plugins&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/build&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;reporting&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;plugins&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;plugin&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;groupId&gt;</span>org.apache.maven.plugins<span class="nt">&lt;/groupId&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;artifactId&gt;</span>maven-checkstyle-plugin<span class="nt">&lt;/artifactId&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;configuration&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;consoleOutput&gt;</span>true<span class="nt">&lt;/consoleOutput&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;configLocation&gt;</span>${basedir}/src/main/etc/checkstyle.xml<span class="nt">&lt;/configLocation&gt;</span>
</span></span><span class="line"><span class="cl">          <span class="nt">&lt;enableRulesSummary&gt;</span>true<span class="nt">&lt;/enableRulesSummary&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&lt;/configuration&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;/plugin&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/plugins&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/reporting&gt;</span>
</span></span></code></pre></div><p>At a minimum that gets me up and running with plug-ins; the next thing I like to tidy up is the header of the POM file. This typically starts with updating the developer information, setting my website URL and adding my details.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;inceptionYear&gt;</span>2010<span class="nt">&lt;/inceptionYear&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;organization&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;name&gt;</span>Mark Wolfe<span class="nt">&lt;/name&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;url&gt;</span>http://www.wolfe.id.au/<span class="nt">&lt;/url&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/organization&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;developers&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;developer&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;name&gt;</span>Mark Wolfe<span class="nt">&lt;/name&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;email&gt;</span>mark@wolfe.id.au<span class="nt">&lt;/email&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;timezone&gt;</span>+10<span class="nt">&lt;/timezone&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;roles&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;role&gt;</span>architect<span class="nt">&lt;/role&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;role&gt;</span>developer<span class="nt">&lt;/role&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/roles&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/developer&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/developers&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;licenses&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;license&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;name&gt;</span>Apache 2<span class="nt">&lt;/name&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;url&gt;</span>http://www.apache.org/licenses/LICENSE-2.0.txt<span class="nt">&lt;/url&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;distribution&gt;</span>repo<span class="nt">&lt;/distribution&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;comments&gt;</span>A business-friendly OSS license<span class="nt">&lt;/comments&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/license&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/licenses&gt;</span>
</span></span></code></pre></div><p>That pretty much completes my Maven kick-start configuration. For a more detailed example, see my <a href="https://github.com/wolfeidau/fxAssetman">fxAssetman</a> muck-around project on GitHub.</p>
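<p>One more tweak worth making is pinning the checkstyle plugin version under <code>pluginManagement</code>, so the reporting output doesn&rsquo;t shift between builds as new plugin releases come out. A minimal sketch &mdash; the version number shown is only illustrative, so check Maven Central for the current release:</p>
<pre><code class="language-xml">&lt;build&gt;
  &lt;pluginManagement&gt;
    &lt;plugins&gt;
      &lt;plugin&gt;
        &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
        &lt;artifactId&gt;maven-checkstyle-plugin&lt;/artifactId&gt;
        &lt;!-- illustrative version; check Maven Central for the current release --&gt;
        &lt;version&gt;2.6&lt;/version&gt;
      &lt;/plugin&gt;
    &lt;/plugins&gt;
  &lt;/pluginManagement&gt;
&lt;/build&gt;
</code></pre>
<p>With the version managed in one place, the <code>&lt;reporting&gt;</code> section above can stay as-is and will pick up the pinned release.</p>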
]]></content:encoded>
    </item>
    
  </channel>
</rss>
