rust code snippet — referencing toml values within a rust application

Listing 1: Rust code referencing some TOML data
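
The gist of that listing is something like the minimal sketch below. The CARGO_PKG_* names are the variables Cargo actually provides; the surrounding code and output format are illustrative rather than the exact code from my tool:

    fn main() {
        // env! expands at compile time to the values Cargo pulled from
        // Cargo.toml's [package] section.
        let name = env!("CARGO_PKG_NAME");
        let version = env!("CARGO_PKG_VERSION");
        let authors = env!("CARGO_PKG_AUTHORS");
        let description = env!("CARGO_PKG_DESCRIPTION");

        println!("{name} {version}");
        println!("written by {authors}");
        println!("{description}");
    }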

I have been working with Rust for a few years now, primarily during my retirement. I’ve authored a few personal tools with it, and I’ve come to prefer it over the latest C++ compilers from GCC or Clang/LLVM. I find it expressive in ways that C++ isn’t. But Rust, like C++, isn’t perfect, and it does have its quirks.

Listing 2: Opening TOML section
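
For reference, the opening [package] section of a Cargo.toml looks something like the sketch below; the values here are placeholders rather than the actual entries from my project:

    [package]
    name = "mytool"
    version = "0.1.0"
    authors = ["Your Name <you@example.com>"]
    description = "A short description of what the tool does."
    edition = "2021"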

One of the first quirks I encountered involved finding an easy way to access elements of the Cargo.toml file from within the Rust application itself. As I moved deeper into Rust I turned to a number of books on the language. One of those books showed how Rust worked by rewriting certain Unix/Linux command line tools in Rust, and part of that book covered command line argument processing, such as handling a version flag that reports the tool’s version. I quickly noticed that the example Rust code duplicated information already defined in the toml file, such as the application’s name, version, author(s), and description. Decades of prior software experience taught me that the last thing you want to do is duplicate information between different files. Instead you want to define data in one location, in one file, and then reference it everywhere in the application and system where you need it. After a bit of searching I found one way to reference that toml information; see listing 1 above, which references the first four definitions in listing 2’s [package] section.

There’s probably a better way to reference toml information, but I used Rust’s built-in environment macros to reference this data, because I learned that when Cargo builds a Rust application it exposes that toml information as environment variables (CARGO_PKG_NAME, CARGO_PKG_VERSION, and so on), which the env! macro then bakes into the binary at compile time. For me this is good enough.
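
To show the payoff, here is one way a version flag could be handled without hard-coding any of that metadata. This is just a sketch using std::env::args rather than a full argument-parsing crate:

    use std::env;

    fn main() {
        // A bare-bones --version handler: everything it prints comes from
        // Cargo.toml by way of the CARGO_PKG_* variables, so the toml file
        // remains the single source of truth.
        if env::args().skip(1).any(|arg| arg == "--version") {
            println!(
                "{} {} by {}",
                env!("CARGO_PKG_NAME"),
                env!("CARGO_PKG_VERSION"),
                env!("CARGO_PKG_AUTHORS")
            );
            return;
        }

        // ... the rest of the tool's normal work would go here ...
    }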

By the way, this post marks my use of the Visual Studio Code plugin CodeSnap to create a visual snapshot of code. I’ve grown tired of trying to use WordPress’ built-in code support. The reference page for how to use it has disappeared, and even when I have used it, I can’t get decent code syntax highlighting. So this is probably what I’ll use from now on.

Links

Rust Environment Variables — https://doc.rust-lang.org/cargo/reference/environment-variables.html

gpt4all on an m1 macbook pro

I’ve been tinkering a bit with large language models small enough to run locally on my M1 MacBook Pro without having to reach out across the internets. Running LLMs locally means not having to send anything across that might be collected, and thus not losing control over whatever I type in (not that whatever I type in is worth collecting…).

You can go look at GPT4All here: https://gpt4all.io/index.html

The application, when first installed, can’t do anything. You need to add LLMs from a curated selection it presents. These are the eleven currently listed.

  1. Wizard V1.1
  2. GPT4All Falcon (the one I’m currently using)
  3. Hermes
  4. ChatGPT-3.5 Turbo
  5. ChatGPT-4
  6. Snoozy
  7. Mini Orca
  8. Mini Orca (small)
  9. Mini Orca (large)
  10. Wizard uncensored
  11. Llama-2-7B Chat

I’ve barely scratched the surface, having just installed the application and then selected a given model. My criteria right now for selecting a model are that there is no link back to the cloud and that it can run in the “limited” 16 GiB of memory on my MacBook. GPT4All Falcon, for example, will run in 8 GiB.

If you stop and think about the resources required by Falcon, it suddenly becomes apparent that it could run on an Apple iPhone if the iPhone had enough memory. Today’s iPhones certainly have the processor horsepower and more than enough local storage to hold such a model. The problem is that the iPhone 15, Apple’s latest device, doesn’t have enough memory. Perhaps the iPad Pros running either the M1 or M2 have enough, but I know the iPhones do not.

Why would I want that much memory? To run one of the LLMs locally. And why do that? So that the LLM could “learn” about me from all the data that passes through my iPhone, and help the iPhone be an even better aid. Smartphones, especially the iPhone, have long since passed the point of “good enough” to perform such tasks. The big question is how much power that would require, because there are always trade-offs with portable computers. The more hardware you put in an iPhone, such as more memory, the more power the device will consume. The more processing the CPUs on the iPhone need to perform, the more power the device will consume. As the saying goes, be careful what you ask for.

And there’s another reason for looking at this for local processing, and that’s in support of the intelligent home. My wife and I grow progressively older. We’re both in our early 70s. I would like a home with sensors and microphones and speakers to be able to assist our living here, rather than have to go live in a classical assisted living facility. And I do NOT want devices that require cloud connectivity and a monthly fee. Right now I have an intelligent [sic] doorbell which is absolute shit. I want far better, and I want local autonomy.

I haven’t even scratched the surface with any of this software. But it’s now approachable and I can run it locally on my MacBook, which I find absolutely amazing. I hope I can do more with it going forward.