
Getting 1080p working via Intel i915 (Haswell 4600) -> HDMI -> VGA -> TV

For a project I recently started, I needed to build a semi-dedicated system. To make the cost a little more palatable, I planned to make it double as an HTPC (and maybe, in time, a Steam Box).

The TV I opted to use had its HDMI circuitry burn out late last year, so, as a workaround, I’ve been using an HDMI-to-VGA converter (HD2V04; it supports HDCP). It has been flawless with a PS3, but it failed horribly with a device that actually wants to trust EDID values, rather than just forcing its own fixed profiles the way a game console would.

Specifically, all the EDID data enumerated was the VESA core set and 1366×768, none of which looked good or even scaled properly on my display.

Adding definitions to xorg.conf repeatedly failed with “no mode of this name”, even when the mappings all appeared to check out. Using cvt and xrandr just produced unsupported video modes, making that seem like a dead end, too. And i915.modeset=0 just made my DPI, which xorg.conf also failed to override, painfully wrong, to the point that fonts didn’t have enough detail to make out much of anything.

Ultimately, after lots of experimentation, I found a working configuration. The information below should work on most distributions, but it has only been tested on Kubuntu 13.10.

First, you should try cvt 1920 1080 (using appropriate values), because, if it works, you’ll save yourself a bit of time. If not, then, like me, you can look for an appropriate modeline from the MythTV Modeline Database.
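For reference, when cvt does work, it prints a ready-made modeline. A transcript would look something like the following (these are cvt’s standard CVT timings for 1920×1080 at 60 Hz, shown only as an illustration):

```
$ cvt 1920 1080 60
# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
```

Everything after the word “Modeline” (the quoted name plus the timing values) is exactly what xrandr --newmode expects as its arguments.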

Once you have values, you’ll need to register them so xrandr will know what “1920×1080” means: xrandr --newmode "1920x1080" 148.50 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync

Then bind the new mode to the output you’re using: xrandr --addmode HDMI3 1920x1080. (You can find the name of your output, HDMI3 in my case, by running xrandr without arguments.)

Finally, set the mode and refresh-rate to see if it works: xrandr --output HDMI3 --mode "1920x1080" --rate 60

If it doesn’t, you can set another mode, even if you can’t see your screen, by typing xrandr --output HDMI3 --mode "1024x768" --rate 60, which is almost certain to exist. (You can see all available modes by running xrandr without arguments.)

Once you have a working value, you can make it permanent, applied on login, by putting it in ~/.xprofile, which is executed as a series of instructions like any other script. (I added a shebang and made the file executable for testing purposes, but this is probably not necessary.)
xrandr --newmode "1920x1080" 148.50 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync
xrandr --addmode HDMI3 1920x1080
xrandr --output HDMI3 --mode "1920x1080" --rate 60

PING in pure Python

For work, I needed to run ping from a Python context in a memory-limited environment. Python was a given, parsing subprocess output is ugly, variable payload sizes were required, and potentially many hosts would need to be pinged in parallel.

Seems like a great job for fping, but distributing external binaries is kinda tricky with this setup, so I did it in Python. (The other Python ping implementations, all of which seem to be derivatives of python-ping, either didn’t meet my basic needs, had more procedural namespace-bleed than I’d prefer to see, tried to do too much (requiring workaround logic just to do what’s actually needed), didn’t handle errors, or were GPL-licensed, which is unfortunately not something this process can make use of.)

The code, which is public domain, is available after the break.
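To give a taste of what’s involved, here is a minimal sketch of the core of any pure-Python ping: building an ICMP echo-request with a valid RFC 1071 checksum. (The function names here are mine, for illustration, not necessarily those used in the actual module.)

```python
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data to a whole number of words
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:  # fold any carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    """Assemble an ICMP echo-request (type 8, code 0) with a valid checksum."""
    # Checksum is computed over the packet with the checksum field zeroed,
    # then written into that field.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    chk = checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, chk, identifier, sequence) + payload
```

Actually sending the packet requires a raw socket (socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname('icmp'))), which needs root privileges on most systems; replies are matched back to requests using the identifier/sequence pair.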

Two video cards, three X sessions, five monitors: yes, it works

So I wanted a lot of screens for the new system I was building: lots of stuff to monitor, a reasonable amount of desk space, and a desire to have at least one separate X session to handle fullscreen games without needing multiple profiles.

It wasn’t particularly difficult to set up, but documentation was sparse. To help anyone else who might be looking to do something similar, I’m providing my X config here; chances are you’ll be able to figure out what needs to be tweaked for your own setup.
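For flavour, the general shape of such a config is below. Every identifier, driver, and BusID in this fragment is a placeholder (take real BusIDs from lspci); it is a sketch of the structure, not my actual file:

```
# Two Device sections, one per card, disambiguated by BusID
Section "Device"
    Identifier "Card0"
    Driver     "intel"        # placeholder driver
    BusID      "PCI:0:2:0"    # placeholder; see lspci
EndSection

Section "Device"
    Identifier "Card1"
    Driver     "radeon"       # placeholder driver
    BusID      "PCI:1:0:0"
EndSection

# One Screen section per head, each tied to a Device
Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
EndSection
# (Screen1, Screen2, etc. defined the same way, against the appropriate card)

# One ServerLayout per X session; each session picks one at startup
Section "ServerLayout"
    Identifier "desktop"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" RightOf "Screen0"
EndSection

Section "ServerLayout"
    Identifier "games"
    Screen 0 "Screen2"
EndSection
```

A specific layout can then be chosen per session, e.g. startx -- :1 -layout games.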

Python threads that mysteriously appear to stop executing

I just solved a weird problem that, once understood, actually makes a lot of sense, but would probably be pretty hard to identify without a lot of guesswork.

My scenario, simplified:

  • One thread that runs in an infinite loop, polling a C-implemented function (from Cython), with a five-second timeout, to populate a queue
  • Any number of worker threads that block on the queue (timeout=7.5s) to get events to process

Now, this should seem like a fairly straightforward thing: a handful of threads, each capable of running in isolation, except for a common dependency on a threadsafe queue. The problem, however, is that the worker threads all eventually seemed to freeze, doing nothing while the infinite-looping thread ran fine.
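In code, the shape was roughly this (poll_events stands in for the Cython-backed call, and all names here are illustrative, not my actual code):

```python
import queue
import threading

events = queue.Queue()  # threadsafe queue shared by all threads

def poller(poll_events, stop):
    """Single thread: poll the C-backed source and feed the queue."""
    while not stop.is_set():
        for event in poll_events(timeout=5.0):  # C call with a 5s timeout
            events.put(event)

def worker(handle, stop):
    """Worker threads: block on the queue for up to 7.5s per attempt."""
    while not stop.is_set():
        try:
            handle(events.get(timeout=7.5))
        except queue.Empty:
            continue  # nothing arrived in time; loop and try again
```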

The symptoms were strange: I could enumerate all of the threads, printouts confirmed that they were, indeed, alive, and the freezing initially appeared to be related to the logging module.

After commenting out every logging statement in the threads, the problem persisted, so logging wasn’t the issue. Next, I replaced queue.get() with a simple time.sleep(7.5) to see whether the threads were still operating and the queue was at fault. The same behaviour occurred: threads froze when they slept. This implied that the problem was related to blocking.

It wasn’t until I started pinging someone uninvolved as a sounding board that the pattern started to make sense: the threads may not be reacquiring the GIL, so they might not ever be able to resume, even after they’re supposed to wake up. I tried waiting for ten minutes and, sure enough, one of the threads showed signs of life.

The problem was that my C polling function never released the GIL, so the entire timeout window was one big instruction as far as Python was concerned. Instead of taking advantage of threads during extended I/O delays, every other thread was blocked waiting for its completion, and the default 100-instruction check interval made the process take forever.
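The pathology is easy to reproduce without Cython, using ctypes: calls made through ctypes.PyDLL hold the GIL for their full duration, while calls through ctypes.CDLL release it (the default for foreign calls). A one-second C-level sleep through PyDLL freezes every other Python thread for the whole second, exactly like my poller did. (This is a demonstration of the effect, not my actual code; it assumes a Unix-like system where sleep() is resolvable from the process image.)

```python
import ctypes
import threading
import time

libc_hold = ctypes.PyDLL(None)  # foreign calls made while HOLDING the GIL
libc_free = ctypes.CDLL(None)   # foreign calls RELEASE the GIL

counter = 0
running = True

def worker():
    """Increment a counter every 50ms; each step needs the GIL."""
    global counter
    while running:
        time.sleep(0.05)
        counter += 1

threading.Thread(target=worker, daemon=True).start()
time.sleep(0.2)  # let the worker get going

before = counter
libc_free.sleep(1)  # GIL released: the worker keeps counting
released_delta = counter - before

before = counter
libc_hold.sleep(1)  # GIL held: the worker is frozen for the full second
held_delta = counter - before
running = False
# released_delta should be far larger than held_delta
```

In Cython, the fix is to release the GIL around the blocking section (a `with nogil:` block, or Py_BEGIN_ALLOW_THREADS at the C level), so other threads can run while the C code waits.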


Simple to fix, but really, really hard to diagnose when just looking at the obvious symptoms. Hopefully, anyone who reads this will jump to a conclusion faster than I did, since it’s the sort of issue that can be really frustrating in what seems like a common design.

Feelin’… networky

As I get stuff set up to set about working on DHCPv6 support for staticDHCPd, I find that I now have five distinct subnets in a sub-1000sqft space. Yes, they all have a purpose (and will continue to exist long after 2.0 is out, ’cause I like my resources clearly defined by boundaries), yet I can’t help but feel that I’m doing something just a little unusual.