Quick Blog Links

about
  • root @
  • MacOS Applications  [x-code + objective-c]
  • SSHPass Automation Program  [python / app]
  • VPN-Like Tunneled Interface & Traffic  [python / networking]
  • DHCP/ARP Relay-Bridge ~ Proxying  [c / networking]
  • Nginx Mod Hook for Stream Proxy Server  [c / networking]
  • Linux Kernel IP-DF Flag Header Rewrite  [c / kernel]

written
  • Secure LAN Communication  [College Thesis]
  • College Project – Teaching Hacking!  [Course Paper]
  • ARM Assembly – A Basic Introduction…  [Blog Post]

configs
  • WiFi Bridge ~ Network Diagram
  • Firewalling ~ eb|iptables
  • Cisco and OpenWRT
  • Ubiquiti and OpenWRT

gear
  • Home Stuff | Mac Mini | WiFi Networks
 
 
# Note: github.com/fossjon <- I lost access due to missing 2fa, so now I'm using -> github.com/stoops
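# rebuild blog.html from the first three pages of the wordpress feed (titles and links only):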
for p in `seq 1 3` ; do
  curl -sL "https://fossjon.com/feed/?paged=$p" | grep -Ei '<(title|link)>' \
    | sed -e 's@<title@~<title@g' | tr ' \t\r\n' ' ' | tr -s ' ' | tr '~' '\n' \
    | sed -e 's@^.*<title>\(.*\)</title>.*<link>\(.*\)</link>.*$@<a href="\2" style="text-decoration:none;font-family:monospace;">\1</a><br/>@' \
    | grep -i '/fossjon.com/'
done > blog.html
   

Thank you Python for years of service and reliability so far (and the ctypes module!)

So I've run into this issue in the past, but I finally started looking into why Python is so slow at running basic math operations in a long loop, for example, simple stream cipher operations. You'll see lots of suggestions to use numpy instead; however, I didn't find that to be the most helpful. Since I like writing and reading C, I remembered that Python has a built-in ctypes module, which is very useful if you are in need of specialized and optimized code paths. You can pretty easily pass in integer and byte-array pointers with little complexity!

For example:

~
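Here is a minimal sketch of the idea (my own illustration, not the snippet from the original post). It assumes a hypothetical libxor.so built from a few lines of C and shows how ctypes can hand the library a zero-copy pointer into a Python bytearray:

# build step (assumed): cc -O2 -shared -fPIC -o libxor.so xor.c
# where xor.c contains:
#   void xorbuf(unsigned char *buf, long len, unsigned char key) {
#       for (long i = 0; i < len; i++) { buf[i] ^= key; }
#   }
import ctypes

lib = ctypes.CDLL("./libxor.so")
lib.xorbuf.argtypes = (ctypes.POINTER(ctypes.c_ubyte), ctypes.c_long, ctypes.c_ubyte)
lib.xorbuf.restype = None

data = bytearray(b"hello world " * 1000)
buf = (ctypes.c_ubyte * len(data)).from_buffer(data)  # zero-copy view of the bytearray

lib.xorbuf(buf, len(data), 0x5A)  # the hot loop now runs in C instead of Python
print(bytes(data[:12]))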

New Samsung S90C OLED – Green Screen Of Death

So last year, for my birthday, I purchased a new Sony PS5 to update my gaming console after so many years, and it immediately failed on me by always shutting down during gameplay. This year, for my birthday, I decided to try and update my 8 year old TV with a new Samsung OLED for the first time and, as my luck would have it, I was presented with the "Green Screen Of Death". The TV only arrived a few days ago and is now dead, so I have to go through the process of contacting a certified Samsung repair person to see if it can even be fixed. I can't tell if it's just my bad luck lately or if quality control at these companies has gone downhill, but it's getting harder and harder to find good quality alternatives! 😦

It Is Getting Harder And Harder To Find Good Quality Products And Services

Rant: Back in the pre-2000s era, if you wanted to find out whether a product or service was of a given quality (low, mid, high), you would have to reach out to the experts in society to help guide your research. Then, when technology came about, I switched my reliance onto two big giants called Google and Amazon. These platforms provided me with the tools and mechanisms to research which products and services were at a specific quality level and price point before acting on it. This process worked for me for a number of years; however, I am now running into what many have described regarding irrelevant Google search results (lost knowledge) and cheapening Amazon product listings (lost quality). I am now finding that, in order to get accurate knowledge on product quality, I either have to search Google for Reddit posts/comments and/or call an expert in real life to try and extract the experience, knowledge, and expertise that person has built up over the years (assuming they haven't retired yet). Things seem to be getting harder instead of easier as life and time go on, which is something I didn't expect to happen when I was a kid growing up with all this once amazing technology!

Edit: Cory Doctorow has been covering this issue in a multi-part series under a new term ~ https://doctorow.medium.com/googles-enshittification-memos-2d6d57306072

Pros and Cons of the Controversial Apple Studio Display VESA Edition

Pros:

  • Flat Industrial Design (Metal & Glass)
  • Clean Even Rounded Simple Bezels
  • 5K+ Display Resolution & 200+ Pixels Per Inch
  • Single Cable Connection (Hub + Power)
  • VESA Mount Monitor Arm Movement
  • High Brightness Helps Reduce Reflections
  • Pretty Good Quality Speakers

Mids:

  • Microphone/Camera quality is alright
  • Non Removable Power Cord == bad
  • No External Power Brick == good

Cons:

  • No Higher Refresh Rate
  • No Local Dimming Zones
  • No physical buttons or controls
  • No HDMI input port for an Apple TV
  • No affordable larger-sized 32-inch version

Overall, this is an amazing monitor for text-based creative and productivity activities such as emailing, browsing, coding, and terminal work, as the font rendering is crisp, clear, sharp, and smooth to look at. Most PC monitors have all sorts of different styles, technologies, and designs built into them which are primarily geared towards PC gaming performance instead. This monitor is good at what it does, is nice to look at, and actually inspires me to use it. I just wish Apple offered a larger 32″ option that was more affordable!

Edit: As always, I appreciate Apple pushing certain industry standards forward, such as Pixels Per Inch and Performance Per Watt, as these metrics make a huge difference in day-to-day usage in terms of product quality.

Prediction: I think that when Apple is able to ship its first Macs with the updated Thunderbolt 5 port and bandwidth, we’ll then see the real versions of the monitors that Apple truly wants to make (for example, Mini-LED 10bit-color 120Hz-refresh — 27inch-5K / 32″-6K / 36″-7K).


The Back Of This Looks Better Than The Front Of Others 🙂
~ Steve Jobs

Updates to my mod-nginx-proxy replacement script in Python

So in my quest to replace my modified-nginx network-wide proxy service with a Python script, I was interested to see what kind of performance I could achieve without having to write the whole thing in C this time. So far, the networking performance has been pretty good after I was able to iron out some connection tracking state issues. I then started looking into making it multi-threaded to help manage a large number of connections in parallel, and I came across potential limitations related to the Global Interpreter Lock in Python. This then directed me to the multiprocessing capabilities in Python and the ability to lock and share memory variables and values, which reminded me of the old-school C-type variable definitions that I enjoy working with. Anyway, I tried to implement my first multi-process shared-variable class, which should allow for locking data reads/writes between processes or threads. Because of this functionality, I was able to transform my proxy script to allow for various processing modes (loop, thread, process) and buffer types (shared, piped).
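For reference, here is a rough sketch of what such a shared-variable class can look like with the standard multiprocessing module (my own simplified illustration, not the class from the proxy script):

import multiprocessing

class SharedVar:
    # a counter living in shared memory, guarded by a lock that works
    # across both processes and threads
    def __init__(self, initial=0):
        self.lock = multiprocessing.Lock()
        self.val = multiprocessing.Value("l", initial, lock=False)  # raw c_long, locked manually

    def get(self):
        with self.lock:
            return self.val.value

    def add(self, n=1):
        with self.lock:
            self.val.value += n
            return self.val.value

def worker(var):
    for _ in range(10000):
        var.add()

if __name__ == "__main__":
    var = SharedVar()
    procs = [multiprocessing.Process(target=worker, args=(var,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(var.get())  # 40000 every time, thanks to the shared lock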

Ending This Year With A New Proxy Python Server Script

So it's been a busy year for me personally as I have been trying my best to get set up in a new location and start some new routines with more focus placed on my health (diet, sleep, and exercise). I've been battling some allergies, including seasonal ones, but if I get them under control in time I do feel much better (almost 95% of the way I used to feel back in the day). I haven't been able to post much here lately because of work and life, but I plan to keep the blog alive whenever new technical projects or anything interesting comes up.

One ongoing project at home was my network-wide VPN tunnelling experiment via the Mac Mini, which has been interesting. OpenVPN worked perfectly speed-wise, but I was having some performance issues with WireGuard, and I believe it was due to the lower MTU required along with the fact that WG does not have similar functionality for pre-fragmenting larger packets coming in from the network clients being routed inside the tunnel. My workaround was a custom-modified version of nginx that proxies the client-side connection and starts a newly sized one on behalf of the client, thus allowing the MTU to be respected directly from the server side. It seemed to perform better; however, I believe I was running into timing-based connection errors, potentially from expired states, sessions, and processes.

So to end the year, I am trying a new project that I wrote from scratch in Python. It performs the same core functionality as what I modded into nginx, for both UDP and TCP connection types. It's multi-process via the load-balancing workaround/trick of using multiple localhost IP addresses, and the core proxying recv/send loop is multi-threaded (you can set a single-threaded loop if you prefer). I will see how long this one lasts me going into the new year!

https://github.com/stoops/pyproxy
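The stripped-down shape of the idea looks something like this (a minimal sketch only; the real pyproxy code handles the transparent-proxy destination lookup, UDP, and the various modes, and the upstream address here is just a placeholder):

import socket
import threading

def pump(src, dst):
    # copy bytes from one socket to the other until either side closes
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def serve(listen_addr, upstream_addr):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen_addr)
    srv.listen(128)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(upstream_addr)
        # one thread per direction, so the recvs/sends run in parallel
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve(("127.0.0.1", 3129), ("203.0.113.10", 443))  # placeholder upstream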

Things I Have Stopped Since The Start Of The Year

[short blog post]

I have made some changes that I've been living by in 2023, which help as I've gotten older, and I wanted to keep a rolling, updated list for my own future documentation purposes. These are some products and companies that I've stopped consuming or purchasing in life.

Dietary:

  • High Fructose Corn Syrup
  • Dairy
  • Caffeine

Products:

Decaf Only 🙂

Experiencing Connection Time-Out Troubles With Multi-Process NGINX

So I was trying to perform some basic redirect-proxying with load-balancing by having multiple nginx processes running for a given connection protocol, and I was experiencing connection timeout troubles from time to time. Even when I tried playing with the different combinations of the proxy and connect timeout values (short-short, short-long, long-short, long-long, etc.), it would still happen over time. I decided to switch up my solution by having multiple single-process nginx instances, each running on a different localhost IP address, to avoid port conflicts. My new setup is as follows:

Multi-Localhost Addresses

# alias extra loopback addresses so each nginx instance can bind the same port on its own address
ifconfig lo0 127.0.0.2 netmask 255.0.0.0 alias
ifconfig lo0 127.0.0.3 netmask 255.0.0.0 alias
ifconfig lo0 127.0.0.4 netmask 255.0.0.0 alias

PF Load-balancing

# vpn
rdr on en0 inet proto udp from any to any port = 500 -> { 127.0.0.1, 127.0.0.2, 127.0.0.3, 127.0.0.4 } port 3121 round-robin
rdr on en0 inet proto udp from any to any port = 4500 -> { 127.0.0.1, 127.0.0.2, 127.0.0.3, 127.0.0.4 } port 3121 round-robin

# udp
rdr on en0 inet proto udp from any to any port 1:5352 -> { 127.0.0.1, 127.0.0.2, 127.0.0.3, 127.0.0.4 } port 3125 round-robin
rdr on en0 inet proto udp from any to any port 5354:65535 -> { 127.0.0.1, 127.0.0.2, 127.0.0.3, 127.0.0.4 } port 3125 round-robin

# tcp
rdr on en0 inet proto tcp from any to any port 1:21 -> { 127.0.0.1, 127.0.0.2, 127.0.0.3, 127.0.0.4 } port 3129 round-robin
rdr on en0 inet proto tcp from any to any port 23:65535 -> { 127.0.0.1, 127.0.0.2, 127.0.0.3, 127.0.0.4 } port 3129 round-robin

Dynamic Bash Shell Prompt (dash)

So it's been a busy year this year and I haven't gotten a chance to post much on the good old blog here, but I wanted to create something that provided me with more dynamically updating information in my bash shell. It's just a demo and a work in progress, but it launches 2 bash shell processes (one for the updated PS1 information and the other for the interactive login). It tries to detect your default PS1 format based on a simple regex, which can be changed and modified to match your formatting and needs. You'll also need to write a shell script, placed inside the PS1 prompt, which gathers the dynamic info you are interested in for your own shell. This can allow you to automatically update and display all sorts of information without needing to type anything out! For example: hostname, LAN/WAN IP, WiFi channel/speed, date/time, weather information, etc. Small demo video below!

Edit: I found another wordpress blog that has posted about a “more-official/proper” way to run an interactive shell with a pty fork/spawn that may be worth incorporating here: https://sqizit.bartletts.id.au/2011/02/14/pseudo-terminals-in-python/

Proof-Of-Concept Code (DASH)

Update: So I’ve tried to incorporate the code from the blog site above into a newer version of this hacky-dynamic-interactive-bash-shell python script, now placed in a dedicated repository: https://github.com/stoops/dash/
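As a tiny illustration of the building blocks involved (not the dash script itself, and prompt-info.sh is a hypothetical helper name), an interactive bash can be spawned on a pseudo-terminal with a prompt that re-runs a helper script before every prompt:

import os
import pty

# PROMPT_COMMAND re-assigns PS1 before each prompt, so the $(...) output is fresh every time;
# prompt-info.sh stands in for whatever script gathers your LAN/WAN IP, WiFi info, weather, etc.
os.environ["PROMPT_COMMAND"] = 'PS1="[$(prompt-info.sh)] \\u@\\h:\\w\\$ "'
pty.spawn(["bash", "--norc", "-i"])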

Keyboard Gear Update – The Five Tactiles (+1 Bonus)

Matias Mini
  • Switch: Alps White
  • Plate: Plastic
  • Mod: Stock
  • Sound: Clicky

Ducky Mini
  • Switch: Cherry MX Brown
  • Plate: Aluminum
  • Mod: Wooden Case
  • Mod: Right-Shift Delete
  • Mod: Arrow-Keys Remap
  • Sound: Marbley

Drop Alt
  • Switch: Halo True + Holy Panda
  • Plate: Aluminum
  • Mod: Stab Tape + Shim
  • Sound: Clacky

KBD67 Lite
  • Switch: Halo True + Holy Panda
  • Plate: Plastic
  • Mod: Plate Silicone
  • Mod: Stab Holee-Fill
  • Mod: Case Tape + Small Weight
  • Sound: Chalky

Mode Envoy
  • Switch: Halo True + Holy Panda
  • Plate: Aluminum
  • Mod: Plate Foam
  • Mod: Case Tape
  • Mod: Fastener-Screws + O-Rings
  • Mod: Solid Blocks + Silicone Feet
  • Sound: Creamy & Poppy

Mode Sonnet
  • Switch: Boba U4T
  • Plate: Aluminum
  • Mod: Plate Foam
  • Mod: Case Tape
  • Mod: Top-Mounts + O-Rings
  • Sound: Creamy & Chalky

Pseudo-Bridging Layer-2 ARP-Sync

So back in the day I was trying to bridge two layer-2 networks over a wireless relay, and I was using a TP-Link Archer C7 V5 for the two routers. I initially tried out relayd; however, I found that it wasn't doing a good job at managing the ARP/route table entries, as they were getting out of sync and not being updated and refreshed properly. I tried modding the framework but eventually gave up and wrote my own solution in C, because these router units had very limited RAM and CPU available. The original framework was called ARP-Relay-Bridge (arprb) and it did a lot of work: managing the ARP table, pinging the clients, listening for ARP requests, sending proxied replies, and managing the routing table; it even had DHCP relay capability as well.

Now I have replaced all my TP-Link units with the Linksys E8450, which has a bit more power, so I tried to re-write a simpler solution to this layer-2 bridging problem in Python. This solution simply sends out the current list of active ARP table entries to another router and, when it receives the entries of another router, it adds an entry to the route table to properly redirect the traffic. This simple solution solves the LAN-to-WAN issue and, with the help of OpenWRT's Linux-based proxy_arp functionality, the LAN-to-LAN issue can also be solved as an extension, since the routers know where to route the packets. Basic instructions in the readme!

Source code: https://github.com/stoops/arpsync
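In spirit, the exchange boils down to something like the following sketch (heavily simplified, not the repository code; the peer address, interface name, and neighbour-table parsing are all placeholder assumptions):

import socket
import subprocess
import time

PEER = ("192.168.1.2", 9999)   # the other router (placeholder)
PORT = 9999

def local_arp_entries():
    # collect the IPv4 neighbours currently known to the kernel
    out = subprocess.run(["ip", "-4", "neigh", "show"], capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines() if "REACHABLE" in line or "STALE" in line]

def add_route(ip, dev="br-lan"):
    # send traffic for the peer's client back through the peer router
    subprocess.run(["ip", "route", "replace", ip + "/32", "via", PEER[0], "dev", dev])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))
sock.settimeout(5)

while True:
    sock.sendto("\n".join(local_arp_entries()).encode(), PEER)
    try:
        data, _ = sock.recvfrom(65536)
        for ip in data.decode().splitlines():
            if ip:
                add_route(ip)
    except socket.timeout:
        pass
    time.sleep(10)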

Socks-like Proxy + VPN-like Tunnel [2-in-1 Experiment]

For my home network setup, I have a Mac Mini that connects to a Linux server with OpenVPN (backup) & WireGuard (primary) to tunnel traffic for the entire network. Due to some lower-MTU issues with WG, I have also set up nginx to act as a socks-like transparent proxy which handles the connections on behalf of the client (so that the server side can keep the LAN MTU matching the client side, as well as forcing a defragmentation of the packets before they enter the VPN tunnel). It then opens a matching proxy connection to the requested destination with a lower TUN MTU & TCP MSS set, so the packets can be properly segmented and transmitted. It's been working great so far, but I was wondering about the performance and speed of this solution (I had only been redirecting TCP port 443 to nginx and, little did I know, the speedtest.net service uses port 8080 behind the scenes, so I had to adjust my firewall rules to be more generic and forward almost all TCP & UDP ports now). After fixing all of that, plus a little more sysctl tuning, the speed tests were all fast and quick!
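To illustrate just the MSS part (an assumption of how one might do it in Python, not code taken from the setup above): the upstream socket can be clamped before connecting so its segments fit inside the lower tunnel MTU.

import socket

TUN_MTU = 1400                      # placeholder tunnel MTU
MSS = TUN_MTU - 40                  # minus IPv4 + TCP header overhead

upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if hasattr(socket, "TCP_MAXSEG"):   # exposed on Linux and the BSDs
    upstream.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, MSS)
upstream.connect(("203.0.113.10", 443))  # placeholder destination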

Anyway, I decided to write a C-based solution (backup) that could theoretically handle both of these services and functions at the same time. It doesn't have solid crypto yet, since it was mostly an experiment so far, but you could easily swap in a real stream cipher (or possibly a block cipher) if you want to. It is a multi-process and multi-threaded app with some basic operating instructions in the readme.

It’s called proxytun – no exciting screenshots or anything – just code, like the old days! 🙂

Source code: https://github.com/stoops/proxytun

OpenWRT Switching To Distributed Switch Architecture

Over the years I got used to the simple Switch configuration tab in the OpenWRT web interface, where I could configure which ports are tagged/untagged on which VLANs and then bridge them to specific interfaces. This page has since been removed in favour of a more flexible/powerful subsystem that OpenWRT has adopted, called DSA. I'm still new to this; however, I recently had to configure a similar interface-to-VLAN layout in this new section. It is now listed under the interface bridge configuration page, but it has a similar table layout with the same set of options. For example: two LAN switch ports as separate VLAN access ports (untagged) and two LAN switch ports as VLAN trunk ports (tagged), with the trunks carrying the traffic of both separated VLANs.
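For illustration only (the bridge name, port names, VLAN IDs, and addresses below are placeholders, not my actual config), that layout expressed directly in /etc/config/network under DSA looks roughly like this, where ':u*' marks an untagged access port with the primary VLAN ID and ':t' marks a tagged trunk port:

config bridge-vlan
        option device 'br-lan'
        option vlan '10'
        list ports 'lan1:u*'
        list ports 'lan3:t'
        list ports 'lan4:t'

config bridge-vlan
        option device 'br-lan'
        option vlan '20'
        list ports 'lan2:u*'
        list ports 'lan3:t'
        list ports 'lan4:t'

config interface 'vlan10'
        option device 'br-lan.10'
        option proto 'static'
        option ipaddr '192.168.10.1'
        option netmask '255.255.255.0'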

The Best Looking iPhone 15 Pro Renders

So I've been tracking the iPhone 15 rumors, and I think the best-looking renders so far are the ones that keep the antenna bands as a straightened, flattened ribbon wrapping around the phone, with the back glass itself curved at the edges. It will then look like the two different pieces have come together to make up the overall phone. This, along with flush camera lenses (a look that dates back to the beautiful iPhone 4S design), would help make it look a lot more stylish as well. I will greatly miss the physical silencing dip switch if Apple decides to remove it, but let's see what they end up doing! 🙂