Category Archives: Tech Zone

AI capabilities advance by the month, sometimes even by the day. There are new things to add even the day before the last class of the semester.

It’s nice that replaceable SSDs have become the norm on newer laptops, but it’s beyond me why the Intel Rapid Storage Technology driver is not included in Windows 11’s installation media.

Running Multiple RStudio Environments on Jupyter

Running RStudio within Jupyter has been possible for quite some time with jupyter-server-proxy. Doing so has its benefits, notably the ability to leverage JupyterHub’s systemdspawner to control the resources each user can consume, a feature that is not available in the free version of RStudio.
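For reference, wiring up those limits is only a few lines in jupyterhub_config.py. The spawner class and option names come from jupyterhub-systemdspawner; the limit values below are placeholders rather than SCRP’s actual settings:

    # jupyterhub_config.py (sketch): per-user resource limits via systemdspawner.
    # The limit values are placeholders, not the ones used on SCRP.
    c.JupyterHub.spawner_class = 'systemdspawner.SystemdSpawner'

    # Cap each single-user server at 4 GiB of RAM and two CPU cores.
    c.SystemdSpawner.mem_limit = '4G'
    c.SystemdSpawner.cpu_limit = 2.0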

It would have been nice to have the ability to choose between different R versions, which is another feature that is only available in the paid version of RStudio. Because jupyter-server-proxy relies on iterating through entry points in each proxy package, the only way to enable that right now is to modify jupyter-rsession-proxy itself.

This is where our #PR133 comes in. By allowing setup_rserver to receive a custom name and a configuration file as arguments, all it takes to add additional R versions on Jupyter is to create a new skeleton package that imports jupyter-rsession-proxy’s setup_rserver and has additional entry points for jupyter_serverproxy_servers.
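A minimal sketch of what such a skeleton package can look like is below; the package name, entry-point label, and the keyword arguments passed to setup_rserver are illustrative assumptions rather than the exact code from the PR:

    # my_rsession_r42/__init__.py -- hypothetical skeleton package for a second R version.
    # It does nothing except wrap jupyter-rsession-proxy's setup_rserver with a
    # different display name and rserver.conf (the argument names are assumptions).
    from jupyter_rsession_proxy import setup_rserver

    def setup_rserver_r42():
        # The config file would point RStudio Server at the R 4.2 installation.
        return setup_rserver(name='RStudio (R 4.2)',
                             cfg_file='/opt/rstudio/rserver-r42.conf')

    # In setup.py (or the equivalent pyproject.toml table), register the extra
    # entry point that jupyter-server-proxy iterates over:
    #
    #   entry_points={
    #       'jupyter_serverproxy_servers': [
    #           'rstudio-r42 = my_rsession_r42:setup_rserver_r42',
    #       ],
    #   }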

Here is a working example we currently use on our HPC cluster.

GPU Server for HPC Cluster

How hard is it to build a server with four top-of-the-line GPUs for a high-performance computing cluster? Harder than you might think.

When I started building the SCRP cluster back in the summer of 2020, the GPU servers were provided by Asrock Rack. Everything except the GPUs was preassembled. This is the sensible thing to do in normal times.

Fast forward to the summer of 2021, and times were not normal. Supply chain disruption and the semiconductor shortage were in high gear. Pretty much every name-brand server manufacturer quoted us months-long lead times, if they were willing to deal with us at all. To get everything in for the new academic year, I constructed a series of servers with parts sourced from different parts of the world. It is actually not that hard to build servers; they are basically heavy-duty PCs with all sorts of specialized parts. That is, unless you want a GPU server suitable for an HPC cluster.

So what is so special about GPU servers for an HPC cluster?

  • Most server cases have seven to eight PCI slots, but I needed at least nine (four dual-slot GPUs plus a single-slot InfiniBand network card). There are maybe two manufacturers of such cases that you can find through retail channels.
  • High-end GPUs use a lot of power. A single RTX 3090 uses 350W, so four mean 1400W. Add in the CPU and other components and you are looking at 1800W minimum. A beefy power supply is definitely needed (see the rough power-budget sketch after this list).
  • 1800W ATX power supplies do exist, you say. The problem is, almost no servers use ATX power supplies; they pretty much all use specialized CRPS power supplies, which give you two power supplies in one small package. There are a lot of benefits to this, including redundancy and a lower load per power supply. Guess how many 2000W CRPS power supplies you can find through retail channels? ZERO. There is simply too much demand for these things from server manufacturers and too little from retail. I was fortunate enough to have one specially ordered on my behalf by a retail supplier, but it took a while to arrive.
  • Once you have sorted out the parts, now comes assembly. Unless you have one of those highly specialized Supermicro 11-slot motherboards (I am not sure they even sell them in retail), your motherboard will be the width of seven PCIe slots. But you need nine! What do you do? Simple, you might think: all that is needed is a PCIe extension cable. Except that one end of the cable has to go under a GPU, and 99% of the cables you can buy cannot do that. I ended up having one custom-made. Yes, custom-made. It is the silver strip in the photo. Did I mention it was so fragile out of the factory that I ended up reinforcing it with hot glue myself?
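To put rough numbers on that power budget (all wattages here are ballpark figures, not measurements):

    # Back-of-the-envelope power budget for a four-GPU build.
    gpu_w = 350            # one RTX 3090
    n_gpus = 4
    cpu_and_rest_w = 400   # CPU, drives, fans, NIC -- a round assumption

    load_w = gpu_w * n_gpus + cpu_and_rest_w
    print(f"Estimated peak load: {load_w}W")              # 1800W
    print(f"With ~10% headroom: {round(load_w * 1.1)}W")  # close to 2000W, hence the 2000W CRPS unit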

To conclude, if you think building your own PC is challenging, building a GPU server for an HPC cluster is probably three times the challenge. Another reason why you should not maintain your own infrastructure.

PCIe Gen 4 GPU does not play nice with Gen 3 extender board

Spent over an hour trying to figure out why some new GPUs were not working. The server in question is an Asrock Rack 2U4G-EPYC-2T, which is a specialized server that allows four GPUs to be installed in a relatively small case. Google was not helpful because, understandably, this is a niche product produced only in small quantities.

What did not work:

  • Attaching four Ampere GPUs (i.e. RTX 3000 series) in their intended positions in the case.

What worked:

  • Attaching four Pascal GPUs (i.e. GTX 1000 series) in the intended positions.
  • Attaching only one Ampere GPU at the rear of the case.
  • Attaching four Ampere GPUs directly to the mainboard.

Took me a good hour to figure out that the issue was caused by the PCIe extender board. The three GPU positions at the front require the extender board, but the board only supports PCIe Gen 3. Normally, Gen 4 GPUs can negotiate with Gen 3 mainboards to communicate at PCIe Gen 3 speed, but apparently they cannot do that through the extender board. Once the issue had been identified, the solution was actually very straightforward: manually forcing the PCIe link speed to Gen 3 solved everything.
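On Linux, one quick way to confirm what link speed each GPU has actually negotiated is to read the PCIe attributes the kernel exposes under sysfs. A minimal sketch, filtering on NVIDIA’s vendor ID:

    # Print current vs. maximum PCIe link speed for every NVIDIA device (vendor 0x10de),
    # using the standard sysfs attributes exposed by the Linux PCI subsystem.
    from pathlib import Path

    for dev in sorted(Path('/sys/bus/pci/devices').iterdir()):
        try:
            if (dev / 'vendor').read_text().strip() != '0x10de':
                continue
            cur = (dev / 'current_link_speed').read_text().strip()
            top = (dev / 'max_link_speed').read_text().strip()
            print(f"{dev.name}: {cur} (max {top})")
        except OSError:
            continue  # some functions, e.g. a GPU's audio device, may lack these attributes

A Gen 4 card running at Gen 3 shows up as 8.0 GT/s instead of 16.0 GT/s; keep in mind that an idle GPU may also train its link down to save power, so check while it is under load.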

Yet another reason why maintaining your own computing infrastructure is not for the faint-hearted.

“B-F-G-P-U”

We will be running tests and benchmarks here at CUHK SCRP over the next few days. Users should be able to access the new RTX 3090 through Slurm after the scheduled maintenance next week.