Improving your error handling in Rust

This is another entry in my fledgling series on Rust as I learn the language. In this post I touch on a few cool crates that can improve your error handling in Rust. It follows nicely from my prior post on logging in Rust. As with that post, this is just an introductory look at error handling: there is a lot more to learn about error handling (in Rust or otherwise) than I will cover here, and these crates are simply a way to get started. Maybe in future posts I will go into more detail on other topics, but there are also good blogs out there already. ...

April 13, 2024 · 5 min · Ray Suliteanu

Logging in a Rust Application

This blog post is part of a series I’m developing documenting my evolution as a Rust developer. Currently I’m a newbie, but I’m making progress. As a newbie, please feel free to point out in the comments what I might be doing wrong, what I could do better, or what isn’t “canonical” Rust. Logging is obviously a key aspect of a production-ready application. While you could use println!, dbg!, or similar Rust macros to a similar effect, they are not really a replacement for a real logging framework. In fact, particularly for long-running “services” as opposed to CLIs, many architecture/coding standards prohibit use of the equivalent of println! in whatever language you’re using. I myself have set such standards on projects I’ve led. ...

April 7, 2024 · 4 min · Ray Suliteanu

There's No Such Thing as 'Regression Testing'

How would you define “regression testing”? According to Wikipedia, “Regression testing (rarely, non-regression testing) is re-running functional and non-functional tests to ensure that previously developed and tested software still performs as expected after a change.” In the modern age of test automation and continuous integration, tests are run (or should be) automatically by the build system. Whether you’re testing functional or non-functional features, there really are only three categories of tests to run: unit tests, integration tests, and system tests. Let’s discuss each in turn. ...

June 10, 2023 · 5 min · Ray Suliteanu

Building cloud software? Don't forget about this ...

When developers set out to architect a piece of software, whether a brand-new green-field project or a rearchitecture of an existing product, the typical considerations are things like: should I use microservices, and how many? How will these microservices communicate? How will data persist? What guarantees must be provided for messages flowing through the system? What’s encrypted, where, and when? And so on. Boxes and lines start appearing on whiteboards (physical or virtual) … here are the N microservices, we’ll use gRPC here, we’ll use Kafka there, and MongoDB will save everything. We’ll deploy to Kubernetes, set up deployments with HPA, have at least 3 instances so our brain doesn’t get split, and monitor with the EFK stack. Great! Let’s write some code. ...

March 22, 2021 · 5 min · Ray Suliteanu

How to "group by" using Java Stream API

Recently I was trying to do essentially a “map-reduce” using the Java Stream API: counting the number of occurrences of words in some input. This wasn’t some huge “big data” input set, so the Stream API was sufficient. But the Stream API doesn’t have a groupBy() operation. While it does have map() and reduce(), there is no groupBy() to chain onto the stream … at least not directly. Since the solution was not obvious, I thought I’d write a quick post on how to do it. ...
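The post’s own code isn’t shown in this excerpt, but the technique it describes can be sketched with the standard library: the “group by” lives in a collector passed to collect(), not in an intermediate stream operation. The class and method names here are illustrative, not from the post.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class WordCount {
    // Group identical words together and reduce each group to its size.
    // Collectors.groupingBy plays the role of the missing groupBy() operation.
    static Map<String, Long> countWords(List<String> words) {
        return words.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(countWords(List.of("to", "be", "or", "not", "to", "be")));
    }
}
```

The downstream collector (Collectors.counting() here) is what makes this a map-reduce: swap it for Collectors.summingInt, averagingDouble, etc., to reduce each group differently.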

March 12, 2021 · 4 min · Ray Suliteanu

Java's fork-join framework

Since Java 7, the JDK has included a set of classes implementing the fork-join pattern, an approach that decomposes work into multiple tasks that can be executed in parallel. Java provides ForkJoinPool and ForkJoinTask as the two primary classes implementing the approach. This post covers an example of using them: converting a Mergesort from a recursive implementation to one using fork-join. Mergesort is a classic divide-and-conquer approach to sorting: the data to be sorted are split into two halves, those halves are each split in two, and so on until the data can no longer be split; then the resulting arrays are merged back together in sorted order. ...
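The post’s full conversion isn’t reproduced in this excerpt; as a rough sketch of the shape it takes, a RecursiveAction (a ForkJoinTask with no result) can fork the two halves and merge them when both complete. The threshold and class name are illustrative choices, not the post’s.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ParallelMergeSort extends RecursiveAction {
    private static final int THRESHOLD = 16; // below this, just sort sequentially
    private final int[] data;
    private final int lo, hi; // half-open range [lo, hi)

    ParallelMergeSort(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected void compute() {
        if (hi - lo <= THRESHOLD) {
            Arrays.sort(data, lo, hi); // base case: range too small to split further
            return;
        }
        int mid = lo + (hi - lo) / 2;
        // Fork both halves, wait for both to finish, then merge them.
        invokeAll(new ParallelMergeSort(data, lo, mid),
                  new ParallelMergeSort(data, mid, hi));
        merge(mid);
    }

    private void merge(int mid) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = data[i] <= data[j] ? data[i++] : data[j++];
        while (i < mid) tmp[k++] = data[i++];
        while (j < hi)  tmp[k++] = data[j++];
        System.arraycopy(tmp, 0, data, lo, tmp.length);
    }

    public static void sort(int[] data) {
        ForkJoinPool.commonPool().invoke(new ParallelMergeSort(data, 0, data.length));
    }
}
```

invokeAll is the key fork-join idiom: it forks one subtask and runs the other in the current worker thread, so the pool isn’t starved by tasks that merely wait.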

January 21, 2021 · 5 min · Ray Suliteanu

Using Protocol Buffers To Serialize To Off-Heap Memory

In my previous post, Using off-heap memory in Java programs, I showed how to set up a memory-mapped file. Now that we have a memory-mapped file, let’s write something to it. There are different approaches to serializing Java objects, including, obviously, built-in Java serialization. But in this post I’m going to use Protocol Buffers (AKA protobuf) for serialization, because why not? It also gives me an opportunity to show how to automate the use of protobuf in your automated build using Gradle. ...
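The post’s generated protobuf classes aren’t shown in this excerpt, so here is only the serializer-agnostic part of the idea: writing a length-prefixed record of serialized bytes into an off-heap buffer and reading it back. A plain UTF-8 payload stands in for what would be a protobuf message’s toByteArray() output; names are illustrative.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthPrefixed {
    // Write one record as [int length][bytes]; the length prefix lets a reader
    // recover record boundaries from a flat region of memory.
    static void writeRecord(ByteBuffer buffer, byte[] serialized) {
        buffer.putInt(serialized.length);
        buffer.put(serialized);
    }

    static byte[] readRecord(ByteBuffer buffer) {
        byte[] record = new byte[buffer.getInt()];
        buffer.get(record);
        return record;
    }

    public static void main(String[] args) {
        // allocateDirect gives off-heap storage, like a MappedByteBuffer would.
        ByteBuffer buffer = ByteBuffer.allocateDirect(256);
        writeRecord(buffer, "hello".getBytes(StandardCharsets.UTF_8));
        buffer.flip(); // switch from writing to reading
        System.out.println(new String(readRecord(buffer), StandardCharsets.UTF_8));
    }
}
```

The same two methods work unchanged whether the bytes came from protobuf, Java serialization, or anything else, and whether the buffer is heap, direct, or memory-mapped.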

January 2, 2021 · 4 min · Ray Suliteanu

Using off-heap memory in Java programs

One of the nice things about modern programming languages is garbage collection. As a developer you don’t have to worry much about allocating and freeing memory for your objects. With Java you just ‘new’ your class and voilà, a new instance of the class; when the instance is no longer referenced, Java takes care of freeing the memory. When you create objects this way, the JVM allocates memory from ‘heap’ memory, memory it manages for you. ...
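The memory-mapped-file setup the post works toward can be sketched with java.nio: FileChannel.map returns a MappedByteBuffer whose storage lives outside the GC-managed heap. This is a minimal sketch with illustrative names, not the post’s exact code.

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OffHeapDemo {
    // Map a region of a file into memory. Reads and writes through the returned
    // buffer go to the mapped region, not to objects on the Java heap.
    static MappedByteBuffer map(Path file, int size) throws Exception {
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // The mapping stays valid even after the channel is closed.
            return channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
        }
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("offheap", ".dat");
        MappedByteBuffer buffer = map(file, 1024);
        buffer.putInt(42);   // written to the mapped region, off the heap
        buffer.flip();
        System.out.println(buffer.getInt());
    }
}
```

Because the buffer’s contents live in the mapped region, they survive as file contents and never add pressure to the garbage collector.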

January 1, 2021 · 3 min · Ray Suliteanu

Bootiful ksqlDb part 2: creating a stream

In this post I will explore the ksqlDb API for creating streams. This is the second post about using ksqlDb with Spring Boot. Read the first post here. A stream in ksqlDb is analogous to a stream in Kafka Streams. One difference is how you create the stream. As described in the first post, a stream in Kafka Streams is created programmatically with an API:

    StreamsBuilder streamsBuilder = new StreamsBuilder();
    KStream<Integer, Order> ordersStream = streamsBuilder.stream("orders");
    ordersStream
        .filter((key, value) -> value.orderType == Electronics)
        .groupByKey()
        .aggregate(() -> 0.0f, (key, value, sum) -> sum += value.orderAmount)
        .toStream()
        .to("output");
    Topology topology = streamsBuilder.build();
    KafkaStreams kafkaStreams = new KafkaStreams(topology, new Properties());
    kafkaStreams.start();

ksqlDb uplevels this to a more familiar SQL syntax. Here’s an example from the ksqlDb quickstart. ...

December 31, 2020 · 4 min · Ray Suliteanu

Separating integration tests from unit tests

Update 2024-12-15: This blog is obviously very OO-centric, focusing on the notion of a class as the smallest “unit”. In non-OO languages, what a “unit” is will vary; it could be, for example, a compilation unit. But the idea still holds: you are trying to test individual units of behavior without pulling in dependencies that may result in longer startup time, failures in the dependencies (which may be out of your control), or code that doesn’t exist yet (for TDD). ...

December 30, 2020 · 4 min · Ray Suliteanu