a video on the cyber security implications of AI

Check Point basically says: whoever develops software that faces the public internet (any program that exchanges messages) needs AI to find the exploits before others find them.

firewall, virus scanner and endpoint protection are good but not enough; no mail client should run in a network that processes sensitive information

AI privacy and politics problem: Orwell's 1984 calling?

so offline, solar-powered (!) AI is definitely a good idea. but is it possible with limited resources? maybe there should be something like “community-run AI”, where a group of users pools together to run the hardware required for more powerful models offline, without the privacy problems.

so in terms of privacy, local AI makes sense

what is openclaw?

https://www.youtube.com/watch?v=tFCgmeOWlA8 <- it sounds interesting if it can do software testing properly

how to install airllm

# tested on debian 13
su - root
apt update
apt install pip
# more precisely, the package is called
apt -y install python3-pip
apt show python3-pip; # check what was installed
apt -y install python3-venv
python3 -m venv airllm; # create environment called airllm (like a separate workspace)
source airllm/bin/activate; # activate
pip install transformers==4.48.0
pip install optimum==1.17.0; # (thx) install dependencies in specific version
pip install airllm; # install pip packages into this environment
# it should then continue to download and install a lot of dependencies
# like cuda and torch so grab multiple coffees...
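once the installs are done, a quick sanity check (standard library only) that the pinned versions actually landed in the venv:

```python
# report the installed version of each package pinned above,
# or flag it if the install silently failed
from importlib.metadata import version, PackageNotFoundError

for pkg in ("transformers", "optimum", "airllm"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "NOT installed")
```

inside the venv this should print transformers 4.48.0 and optimum 1.17.0, matching the pins above.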

now log in to https://huggingface.co/ and create a read access token

size of models: 70B parameters means roughly 70 GB of weights at 8-bit (one byte per parameter); fp16 doubles that, 4-bit quantization roughly halves it
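a quick back-of-the-envelope check of that claim (pure arithmetic: parameter count times bytes per parameter, ignoring activations and overhead):

```python
def model_ram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough weight-memory footprint: 1e9 params * bytes, divided by 1e9 bytes/GB."""
    return params_billion * bytes_per_param

for fmt, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"70B @ {fmt}: ~{model_ram_gb(70, bpp):.0f} GB")
# 70B @ fp16: ~140 GB
# 70B @ int8: ~70 GB
# 70B @ int4: ~35 GB
```

airllm's whole trick is that it loads the model layer by layer, so peak RAM/VRAM stays far below these totals.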

so what models are possibly usable via airllm? not all 🙁

only models that are published in slices (smaller chunks of one big file, called “shards”)

WARNING! MASSIVE CHUNKS OF DATA!

garage-bAInd/Platypus2-70B-instruct consists of 15x 10 GByte “chunks” called “shards” (~150 GBytes total)

here the massive resource use of AI and the low bandwidth of Germany hit hard
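to put that into numbers, a rough download-time estimate for the ~150 GBytes (the line speeds are just example values):

```python
def download_hours(gigabytes: float, mbit_per_s: float) -> float:
    # bytes -> bits, divide by line rate (bits/s), convert seconds -> hours
    return gigabytes * 1e9 * 8 / (mbit_per_s * 1e6) / 3600

for mbit in (16, 50, 100):
    print(f"150 GB at {mbit} Mbit/s: ~{download_hours(150, mbit):.1f} h")
# 150 GB at 16 Mbit/s: ~20.8 h
# 150 GB at 50 Mbit/s: ~6.7 h
# 150 GB at 100 Mbit/s: ~3.3 h
```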

create a start.py with vim, or just type python3 to get into an interactive console, and replace the example token below with the user's own huggingface read token

source airllm/bin/activate; # activate the environment

vim ./start.py; # and fill it with
from airllm import AutoModel
MAX_LENGTH = 128
from huggingface_hub.hf_api import HfFolder
HfFolder.save_token('38djl8edjfo3dufio380df')
# the download takes forever depending on the user's internet bandwidth (15 shards x ~10 GBytes)
model = AutoModel.from_pretrained("garage-bAInd/Platypus2-70B-instruct")
# model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct") # failed with some error

# or use model's local path...
# model = AutoModel.from_pretrained("/home/ubuntu/.cache/huggingface/hub/models--garage-bAInd--Platypus2-70B-instruct/snapshots/b585e74bcaae02e52665d9ac6d23f4d0dbc81a0f")

input_text = [
        'What is the meaning of life?',
        #'I like',
    ]

input_tokens = model.tokenizer(input_text,
    return_tensors="pt", 
    return_attention_mask=False, 
    truncation=True, 
    max_length=MAX_LENGTH, 
    padding=False)
           
generation_output = model.generate(
    input_tokens['input_ids'].cuda(), 
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True)

output = model.tokenizer.decode(generation_output.sequences[0])

print(output)

save and exit vim (ESC :wq) and start

chmod +x *.py
python3 ./start.py

then wait a long time: depending on bandwidth it will download the model (~10 GBytes per shard, ~150 GBytes total!)

… and then get some obscure Python error, because it could not complete the download and there is no resume function; Hugging Face said something about a request limit being reached. “hurray”.
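one workaround: before retrying, check which shards already arrived in the local Hugging Face cache, so at least the failure point is visible. the shard naming pattern (`pytorch_model-000XX-of-00015.bin`) and the cache path are assumptions based on the usual Hugging Face layout, not verified against this exact repo:

```python
from pathlib import Path

def missing_shards(snapshot_dir: str, total: int = 15) -> list[int]:
    """Return 1-based indices of shard files not yet present in snapshot_dir."""
    present = {p.name for p in Path(snapshot_dir).glob("pytorch_model-*-of-*.bin")}
    return [i for i in range(1, total + 1)
            if f"pytorch_model-{i:05d}-of-{total:05d}.bin" not in present]

# example (path is the snapshot dir from the start.py comment above):
# print(missing_shards("/home/ubuntu/.cache/huggingface/hub/models--garage-bAInd--Platypus2-70B-instruct/snapshots/b585e74bcaae02e52665d9ac6d23f4d0dbc81a0f"))
```

also worth a try: newer versions of `huggingface_hub` are supposed to resume partially downloaded files automatically, so `pip install -U huggingface_hub` inside the venv may help.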

src: https://github.com/lyogavin/airllm?tab=readme-ov-file#quickstart

how to install openclaw

“between technology and art” the developer said

art that makes AI providers some money…

su - root
curl -fsSL https://openclaw.ai/install.sh | bash

▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
██░▄▄▄░██░▄▄░██░▄▄▄██░▀██░██░▄▄▀██░████░▄▄▀██░███░██
██░███░██░▀▀░██░▄▄▄██░█░█░██░█████░████░▀▀░██░█░█░██
██░▀▀▀░██░█████░▀▀▀██░██▄░██░▀▀▄██░▀▀░█░██░██▄▀▄▀▄██
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
                  🦞 OPENCLAW 🦞                    
 
┌  OpenClaw onboarding
│
◇  Security ─────────────────────────────────────────────────────────────────────────────────╮
│                                                                                            │
│  Security warning — please read.                                                           │
│                                                                                            │
│  OpenClaw is a hobby project and still in beta. Expect sharp edges.                        │
│  By default, OpenClaw is a personal agent: one trusted operator boundary.                  │
│  This bot can read files and run actions if tools are enabled.                             │
│  A bad prompt can trick it into doing unsafe things.                                       │
│                                                                                            │
│  OpenClaw is not a hostile multi-tenant boundary by default.                               │
│  If multiple users can message one tool-enabled agent, they share that delegated tool      │
│  authority.                                                                                │
│                                                                                            │
│  If you’re not comfortable with security hardening and access control, don’t run           │
│  OpenClaw.                                                                                 │
│  Ask someone experienced to help before enabling tools or exposing it to the internet.     │
│                                                                                            │
│  Recommended baseline:                                                                     │
│  - Pairing/allowlists + mention gating.                                                    │
│  - Multi-user/shared inbox: split trust boundaries (separate gateway/credentials, ideally  │
│    separate OS users/hosts).                                                               │
│  - Sandbox + least-privilege tools.                                                        │
│  - Shared inboxes: isolate DM sessions (`session.dmScope: per-channel-peer`) and keep      │
│    tool access minimal.                                                                    │
│  - Keep secrets out of the agent’s reachable filesystem.                                   │
│  - Use the strongest available model for any bot with tools or untrusted inboxes.          │
│                                                                                            │
│  Run regularly:                                                                            │
│  openclaw security audit --deep                                                            │
│  openclaw security audit --fix                                                             │
│                                                                                            │
│  Must read: https://docs.openclaw.ai/gateway/security                                      │
│                                                                                            │
├────────────────────────────────────────────────────────────────────────────────────────────╯

openclaw onboard

well, it seems that as of 2026-03 the claude api is not free anymore 🙁

and here the story ends… because, as suspected, the above c't 3003 example cannot be done without giving Anthropic (the Claude AI provider) money. money, the token that is supposed to spur innovation and jobs, has lately not worked very well at fostering peace and cooperation.

PicoClaw.io same same but Chinese?

https://picoclaw.io/

seems to be a Chinese clone of https://openclaw.ai/

Being hyped all over social media as an AI that runs on an Arduino (?). so what exactly is picoclaw.io? (Warning! there are MANY imposter sites, more or less legitimate! and as always a warning: IT’S POSSIBLE PICOCLAW (MADE IN CHINA) IS SIMPLY RIDING THE OPENCLAW HYPE IN ORDER TO… DO MALICIOUS THINGS. ALWAYS test out new software inside a virtual machine!)

how to install on GNU Linux Debian 13 (VM!)

# as non root
wget https://github.com/sipeed/picoclaw/releases/latest/download/picoclaw_x86_64.deb
su - root; # become root
dpkg -i picoclaw_x86_64.deb
# Ctrl+D log off root

so what do we get for that?

not really sure. it seems to be “only” a GUI for communicating with AI hosted as a service

# as non root user run
picoclaw onboard


picoclaw agent -m "Hello!"

Error: error creating provider: model "" not found in model_list: model "" not found in model_list or providers
Usage:
picoclaw agent [flags]

Flags:
-d, --debug Enable debug logging
-h, --help help for agent
-m, --message string Send a single message (non-interactive mode)
--model string Model to use
-s, --session string Session key (default "cli:default")

ps: haiku vs opus:

Q: what’s the difference between haiku 4.5 and opus?

A: “I can provide some general context about Claude models, though I should note that my training data has a cutoff and I don’t have the most current information about all model versions as of March 2026.

Historically, Claude models have been released in different sizes and capability tiers:

Haiku (the smaller model) is designed for speed and efficiency. It’s faster and more cost-effective, making it ideal for tasks that don’t require extensive reasoning or analysis. It handles straightforward questions, simple writing tasks, coding help, and general information well.

Opus (the larger model) is designed for complex reasoning and advanced capabilities. It excels at sophisticated analysis, detailed problem-solving, nuanced writing, complex coding tasks, and situations requiring deep understanding or creative thinking.

The trade-off is typically speed versus capability—Haiku is quicker but less powerful, while Opus is more powerful but slower and more resource-intensive.

Since you’re interacting with me (Haiku), I’m optimized for responsiveness and efficiency. For tasks requiring deeper analysis or more sophisticated reasoning, Opus would be the better choice, though it would take longer to respond.

If you’re curious about the specific technical differences or performance metrics between these versions as they exist in 2026, I could search for more current information. Would that be helpful?”

currently can be used for free here https://duck.ai/

there is also “sonnet”, which sits in between. to be honest, there needs to be a naming scheme for models that identifies them as “light”, “medium” or “heavy” (in terms of resource use).

(WARNING! DUCKDUCKGO is possibly silently cooperating with Microsoft, and while it can also be used anonymously via the Tor Browser Bundle, it will most likely forward all prompts to M$)
