Prusa Mini: programmatically upload files via curl bash script

http api, ethernet, 3d printing, prusa mini+

Thanks to the recent v4.4.1 BuddyBoard firmware, the HTTP file API works as desired: you can easily upload files to a USB stick attached to the printer. For bulk updates of your printer farm, it's much easier to write a simple bash script which deploys the print jobs:

#!/usr/bin/env bash

set -e

# printer settings
PRINTER_HOST="192.168.1.123"
API_KEY="ToEn8eDlR7kWIiUpVPJg"
FILENAME=myfile.gcode

# capture command stdout - http status code will be written to stdout
# progress bar on stderr
# http response (json) stored in /tmp/.upload-response
# note: the assignment is guarded - otherwise "set -e" would abort
# the script on a failed curl call before the error handling runs
CURL_EXITCODE=0
CURL_HTTP_STATUS=$(curl \
    --header "X-Api-Key: ${API_KEY}" \
    -F "file=@${FILENAME}" \
    -F "path=" \
    -X POST \
    -o /tmp/.upload-response \
    --write-out "%{http_code}" \
    "http://${PRINTER_HOST}/api/files/local"
) || CURL_EXITCODE=$?

# get result
CURL_RESPONSE=$(cat /tmp/.upload-response)

# success ?
if [ ${CURL_EXITCODE} -ne 0 ] || [ "${CURL_HTTP_STATUS}" -ne "201" ]; then
    echo "error: upload failed (${CURL_HTTP_STATUS})"
else
    echo "upload succeed"
fi
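The same upload call can be wrapped into a simple loop to deploy a job to the whole printer farm – a minimal sketch; the host addresses and API keys below are hypothetical:

# printer farm: host => api-key (hypothetical values)
declare -A PRINTERS=(
    ["192.168.1.123"]="ToEn8eDlR7kWIiUpVPJg"
    ["192.168.1.124"]="AnotherPrinterApiKey"
)

# upload the same job to each printer
for HOST in "${!PRINTERS[@]}"; do
    curl \
        --header "X-Api-Key: ${PRINTERS[$HOST]}" \
        -F "file=@myfile.gcode" \
        -F "path=" \
        -X POST \
        "http://${HOST}/api/files/local"
done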

Uploading multiple files including checksums via HTTP can be achieved with cURL and a few lines of bash scripting. This might replace scp in most cases.

# array of files (and checksums) provided as cURL options
UPLOAD_FILES=()

# get all files within myUploadDir dir and calculate checksums
while read -r FILE
do
    # get sha256 checksum
    CHECKSUM=$(sha256sum "${FILE}" | awk '{print $1}')
    echo "${FILE}"
    echo "${CHECKSUM}"

    # extract filename
    FILENAME=$(basename "${FILE}")

    # append file and checksum to curl upload args
    UPLOAD_FILES+=("-F" "file=@${FILE}")
    UPLOAD_FILES+=("-F" "${FILENAME}=${CHECKSUM}")

# get all files within myUploadDir
done <<< "$(find myUploadDir/* -type f | sort)"

# upload
curl \
     -X PUT -H "Content-Type: multipart/form-data" \
     "${UPLOAD_FILES[@]}" \
     https://httpbin.org/put

Install Debian 10 Buster on HPE Microserver GEN10 | Update

microserver, amd, opteron, x3216, x3418, x3421

Pure DEBIAN :)#

The HPE Microserver GEN10 is an impressive piece of rock-solid hardware. Of course, iLO is missing compared to the GEN8, but for most use-cases that's not a real issue.

Debian Buster runs nearly out-of-the-box using the netinstall image via USB stick or network boot. The following tweaks are required to run it flawlessly:

No Graphics after running the installer#

The firmware package firmware-linux-nonfree is required for the AMD APU. Adding "nomodeset" to the kernel command line may also work, as mentioned on debian.org.
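A minimal sketch of the package installation – it assumes the non-free repository component is already enabled in your apt sources (the commented sources.list line is just an example):

# example sources.list entry providing the non-free component:
# deb http://deb.debian.org/debian buster main non-free

# install the AMD firmware package
apt update
apt install firmware-linux-nonfree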

IOMMU Error#

You may notice an IOMMU error on boot: the IOMMU is disabled by default – to enable it, add the following parameters to your grub config:

File: /etc/default/grub

GRUB_CMDLINE_LINUX="amd_iommu=on iommu=pt"

Run update-grub2 to apply the changes and reboot the system – press F2 within the boot menu to open the BIOS/UEFI menu. Additionally, the IOMMU has to be enabled in Chipset -> GFX Configuration -> IOMMU.

In case you don't run any VMs on the machine, consider keeping the IOMMU disabled – otherwise the SATA ports (Marvell 88SE9230) on the front become unusable!
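To apply and verify the change – a short sketch (AMD-Vi lines within dmesg indicate an active IOMMU):

# regenerate the grub config and reboot
update-grub2
reboot

# after the reboot: verify that the iommu has been initialized
dmesg | grep -i -e iommu -e amd-vi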

CPUInfo#

Just FYI

 # cat /proc/cpuinfo 
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 21
model		: 96
model name	: AMD Opteron(tm) X3418 APU
stepping	: 1
microcode	: 0x600611a
cpu MHz		: 1300.000
cache size	: 1024 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 2
apicid		: 16
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good acc_power nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb bpext ptsc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 xsaveopt amd_ibpb arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic overflow_recov
bugs		: fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips	: 3593.06
TLB size	: 1536 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro acc_power [13]

Power consumption#

  • IDLE: about 15 W with a weak power factor of ~0.41 (SATA boot SSD; no HDD)

Cryptsetup benchmark#

# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       798003 iterations per second for 256-bit key
PBKDF2-sha256    1126290 iterations per second for 256-bit key
PBKDF2-sha512    1038194 iterations per second for 256-bit key
PBKDF2-ripemd160  529049 iterations per second for 256-bit key
PBKDF2-whirlpool  373424 iterations per second for 256-bit key
argon2i       4 iterations, 638239 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
argon2id      4 iterations, 639177 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
#     Algorithm |       Key |      Encryption |      Decryption
        aes-cbc        128b       532.4 MiB/s      1417.2 MiB/s
    serpent-cbc        128b        68.2 MiB/s       231.7 MiB/s
    twofish-cbc        128b       128.6 MiB/s       203.6 MiB/s
        aes-cbc        256b       429.0 MiB/s      1177.7 MiB/s
    serpent-cbc        256b        78.3 MiB/s       234.3 MiB/s
    twofish-cbc        256b       138.3 MiB/s       204.9 MiB/s
        aes-xts        256b       848.9 MiB/s       853.7 MiB/s
    serpent-xts        256b       246.4 MiB/s       227.9 MiB/s
    twofish-xts        256b       195.3 MiB/s       202.2 MiB/s
        aes-xts        512b       760.8 MiB/s       769.9 MiB/s
    serpent-xts        512b       247.6 MiB/s       227.3 MiB/s
    twofish-xts        512b       193.9 MiB/s       201.1 MiB/s

Gitea 1.5 on MariaDB 10.1

utf8mb4_general_ci; specified key was too long; max key length is 767 bytes

Error Messages#

In case you've tried to upgrade to Gitea 1.4 or 1.5 on Debian 9 with MariaDB 10.1, the following error messages will be thrown to your log and the service won't start:

[...itea/routers/init.go:60 GlobalInit()] [E] Failed to initialize ORM engine: migrate: 
do migrate: Sync2: Error 1071: Specified key was too long; max key length is 767 bytes

Issue#

The issue is caused by the newly introduced utf8mb4 charset (collation utf8mb4_general_ci) which is set as default in Gitea >=1.4. This charset requires up to 4 bytes per character, so an index over a VARCHAR(255) field can take 255 × 4 = 1020 bytes – which doesn't fit into the 767 byte InnoDB index key limit.

Solution#

The only reliable solution is an upgrade to MariaDB 10.2 or 10.3. Just changing settings like innodb_large_prefix=1 or innodb_file_format=Barracuda as mentioned on several sites won't have any effect on existing tables.

Workaround#

I've used a legacy version of Gitea (1.2.3) for a long time which was initially created with the utf8_general_ci scheme. Therefore I've decided to alter the table + field charsets manually via phpMyAdmin and set them to utf8_general_ci.
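A sketch of such a conversion using the mysql CLI instead of phpMyAdmin – the database name (gitea) and the table (repository) are placeholders; repeat the statement for each affected table:

# convert a single table back to the 3-byte utf8 charset
# note: "gitea" (database) and "repository" (table) are placeholders
mysql -u root -p -D gitea \
    -e "ALTER TABLE repository CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;"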

You have to run the upgrade procedure (start the gitea executable) several times because the new tables are not created at once (repeat it 3..5 times).

Finally it works, but I'm not sure if there will be any side effects in the future…

Netgear GS108Ev3 Firmware Upgrade failed

switch stuck in bootloader mode, timeout, linux, debian, ubuntu

Upgrading a Netgear switch can be very annoying… I've recently bought a second GS108Ev3 and wanted to upgrade the firmware initially, but the switch got stuck in bootloader mode (still web accessible on 192.168.0.239). When running the upgrade via Firefox or Chromium on Debian, the firmware upload stops at ~7% with a timeout error. Same issue with tftp.

Solution#

Use a Windows machine (Win 10) + the Google Chrome browser and run the firmware upgrade procedure via the web interface on 192.168.0.239 – this even works in case the Netgear ProSAFE configuration utility throws a timeout error. VERY WEIRD!

Overall, the (first) switch has performed very well over the last few years and draws very little power – a great SOHO product with VLAN capabilities (PVID/tagged/untagged), but the firmware needs a makeover…

Grandstream VoIP over OpenVPN

asterisk, gxp1625, vpn, settings, config, nat

Grandstream VoIP telephones are very popular because of their high build quality combined with an excellent price. In some cases you want to use an encrypted communication channel between your device and the PBX (e.g. asterisk). The current Grandstream firmware includes basic OpenVPN support (client mode, tun) which allows you to tunnel the whole SIP/RTP traffic over an encrypted channel. This is also the best solution to avoid any kind of NAT/routing issues because all devices are directly accessible within the virtual IP subnet.

OpenVPN Server Config#

Use the following (minimal) configuration as a template. The important options are set to work with the current Grandstream firmware (1.0.4.106). Certificate based authentication is preferred for security (login/password is not needed)!

tls-server
dev tunX
topology subnet
server 172.16.1.0 255.255.255.0
port 10111
proto udp

# cert based auth
pkcs12 server.p12

# 1024 and 2048 bit dh params are supported
dh dh2048.pem
keepalive 10 120
script-security 2

# bf-cbc as well as aes-128-cbc are supported by the current firmware
cipher aes-256-cbc

# well, sha1 is a bit weak but it's set within the grandstream firmware
auth sha1

# compression has to be enabled
comp-lzo

tun-mtu 1500
mtu-disc yes

# custom logging
verb 3

# retain TOS flags (VoIP)
passtos

# internal network (VOIP Server)
push "route 10.16.0.1 255.255.255.0"

Notes#

  • Don't forget to alter your firewall rules. The new OpenVPN subnet needs to be accessible by your VoIP server (e.g. asterisk) and vice versa – see the sketch below
  • Add Quality-of-Service rules to your router which match the OpenVPN port set above. The traffic should be marked with class EF (realtime, expedited forwarding) to avoid packet loss. Default VoIP rules will not match because of the encrypted channel!
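A minimal iptables sketch matching the config above – the tun0 device name and the internal 10.16.0.0/24 VoIP subnet are assumptions derived from the pushed route:

# allow traffic between the OpenVPN subnet and the internal VoIP network
iptables -A FORWARD -i tun0 -s 172.16.1.0/24 -d 10.16.0.0/24 -j ACCEPT
iptables -A FORWARD -o tun0 -s 10.16.0.0/24 -d 172.16.1.0/24 -j ACCEPT

# mark the outgoing OpenVPN traffic (udp/10111) with DSCP class EF (expedited forwarding)
iptables -t mangle -A POSTROUTING -p udp --dport 10111 -j DSCP --set-dscp-class EF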

express.js: use global template variables with ejs

Using ejs as template engine within the express.js default configuration can be very annoying – you have to pass a dedicated variable set to each response.render() call. But a lot of tasks require some kind of global variables in your templates, e.g. page title, resources and much more.

The most reliable solution is a custom template renderer which invokes ejs in the way you want.

Custom Template Engine/Renderer Function#

const _ejs = require('ejs');

// example: global config
const _config = require('../config.json');

// custom ejs render function
module.exports = function render(filename, payload={}, cb){
    // some default page vars
    payload.page = payload.page || {};
    payload.page.slogan = payload.page.slogan || _config.slogan;
    payload.page.title = payload.page.title || _config.title;
    payload.page.brandname = payload.page.brandname || _config.name;

    // resources
    payload.resources = payload.resources || {};

    // render file
    // you can also pass some ejs lowlevel options
    _ejs.renderFile(filename, payload, {
        
    }, cb);
}

Usage#

const _express = require('express');
const _webapp = _express();
const _path = require('path');
const _tplengine = require('./my-template-engine');

// set the view engine to ejs
_webapp.set('views', _path.join(__dirname, '../views'));
_webapp.engine('ejs', _tplengine);
_webapp.set('view engine', 'ejs');

// your controller
_webapp.get('/', function(req, res){
   // render the view using additional variables
   res.render('myview', {
     x: 1,
     y: 2
   });
});


Use EnlighterJS with marked

markdown, gfm, javascript, nodejs

marked is one of the most popular markdown parsers written in javascript. It's quite easy to integrate EnlighterJS with it: just use a custom code renderer which wraps code blocks in EnlighterJS compatible markup.

Promise based highlighting#

File: markdown.js

const _marked = require('marked');
const _renderer = new _marked.Renderer();

// escape html specialchars
function escHtml(s){
    return s.replace(/&/g, '&amp;')
            .replace(/"/g, '&quot;')
            .replace(/</g, '&lt;')
            .replace(/>/g, '&gt;');
}

// EnlighterJS Codeblocks
_renderer.code = function(code, lang){
    return `<pre data-enlighter-language="${lang}">${escHtml(code)}</pre>`;
};

const _options = {
    // gfm style line breaks
    breaks: true,

    // custom renderer
    renderer: _renderer
};

// promise proxy
function render(content){
    return new Promise(function(resolve, reject){
        // async rendering
        _marked(content, _options, function(e, html){
            if (e){
                reject(e);
            }else{
                resolve(html);
            }
        });
    });
}

module.exports = {
    render: render
};


Usage#

const _markdown = require('./markdown');

// fetch markdown based content
const rawCode = getMarkdownContent(..);

// render content (within an async context)
const html = await _markdown.render(rawCode);


Node.js: compare directory contents via sha256 checksums

Comparing the contents of two directories binary-safe is a commonly used feature, especially for data synchronization tasks. You can easily implement a simple compare algorithm by generating the sha256 checksum of each file – this is not a high-performance solution, but it even works on large files!

const _fs = require('fs-magic');

// compare directoy contents based on sha256 hash tables
async function compareDirectories(dir1, dir2){
    // fetch file lists
    const [files1, dirs1] = await _fs.scandir(dir1, true, true);
    const [files2, dirs2] = await _fs.scandir(dir2, true, true);

    // num files, directories equal ?
    if (files1.length != files2.length){
        throw new Error('The directories contain a different number of files ' + files1.length + '/' + files2.length);
    }
    if (dirs1.length != dirs2.length){
        throw new Error('The directories contain a different number of subdirectories ' + dirs1.length + '/' + dirs2.length);
    }

    // generate file checksums
    const hashes1 = await Promise.all(files1.map(f => _fs.sha256file(f)));
    const hashes2 = await Promise.all(files2.map(f => _fs.sha256file(f)));

    // convert arrays to objects filename=>hash
    const lookup = {};
    for (let i=0;i<hashes2.length;i++){
        // normalized filenames
        const f2 = files2[i].substr(dir2.length);
        
        // assign
        lookup[f2] = hashes2[i];
    }

    // compare dir1 to dir2
    for (let i=0;i<hashes1.length;i++){
        // normalized filenames
        const f1 = files1[i].substr(dir1.length);

        // exists ?
        if (!lookup[f1]){
            throw new Error('File <' + files1[i] + '> does not exist in <' + dir2 + '>');
        }

        // hash valid ?
        if (lookup[f1] !== hashes1[i]){
            throw new Error('File Checksum of <' + files1[i] + '> does not match <' + files2[i] + '>');
        }
    }

    return true;
}

// usage (within an async context)
await compareDirectories('/tmp/data0', '/tmp/data1');


TravisCI: Use custom Node.js version within container based builds

nodejs binary, custom version, second language

Sometimes you may need a special version of Node.js, or a recent version within a foreign build environment. But in the modern container-based infrastructure it is not possible to use apt to install custom packages which are not whitelisted. As a workaround, you can download pre-built binaries via wget into your build directory and add the bin/ dir to your PATH. This allows you to use any pre-built third-party software without installation.

Example: Perl with javascript testcases#

os: linux

language: perl

perl:
  - "5.24"
  - "5.14"

# skip perl (cpanm) dependency management
# install nodejs into home folder
install: 
  # fetch latest nodejs archive
  - wget https://nodejs.org/dist/v8.8.1/node-v8.8.1-linux-x64.tar.gz -O /tmp/nodejs.tgz
  # unzip
  - tar -xzf /tmp/nodejs.tgz
  # add nodejs binaries to path - this has to be done here!
  - export PATH=$PWD/node-v8.8.1-linux-x64/bin:$PATH
  # show node version
  - node -v
  - npm -v
  # install node dependencies
  - npm install

script:
  # syntax check
  - perl -Mstrict -Mdiagnostics -cw rsnapshot
  # run javascript based tests
  - npm test