No, dear AI, the two circles are not the same size. I deliberately enlarged one of them to see whether you would actually measure them.

#CUHK Department of Economics is the only Department of Economics in Asia to have its own Generative AI service. Everything runs on-premises and never leaves campus.

Sometimes, being under pressure is helpful. People need a push to perform, and AI is no different.

The biggest problem facing the United States right now is not Trump’s conservative policies, nor Musk’s small-government ideals; both, in fact, enjoy considerable support among the American public. The problem lies in the speed of implementation and the lack of checks and balances. Unless America turns authoritarian, the parties will always alternate in power. In a stable two-party system like America’s, alternation is almost certain whenever inflation is high or the economy is weak. A drastic administrative swing like this every 4-8 years imposes too high a cost on society.

Many things in this world, from playing chess to governing a country, have no single correct approach: you pick a direction and execute it. Flip-flopping and indecision are the real problems. Take energy as an example. Whether it is the Democrats developing green energy or the Republicans developing conventional energy, neither can be accomplished in just a few years. When one side takes office and immediately rolls back the other side’s policies, the result is that the country develops nothing at all.

AI capabilities advance by the month, sometimes even by the day. There were new things to add even a day before the last class of the semester.

It’s nice that replaceable SSDs have become the norm on newer laptops, but it’s beyond me why the Intel Rapid Storage Technology driver is not included in Windows 11’s installation media.

Running Multiple RStudio Environments on Jupyter

Running RStudio within Jupyter has been possible for quite some time with jupyter-server-proxy. Doing so has its benefits, notably the ability to leverage JupyterHub’s systemdspawner to control the amount of resources each user can consume, a feature that is not available in the free version of RStudio.
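As an illustration, a JupyterHub configuration along these lines caps every user’s session; the specific limits below are made-up values for the sketch, not our production settings:

    # jupyterhub_config.py: enforce per-user resource limits by spawning
    # each session inside a systemd unit via systemdspawner.
    # The limits are illustrative assumptions, not our actual settings.
    c.JupyterHub.spawner_class = "systemdspawner.SystemdSpawner"
    c.SystemdSpawner.mem_limit = "4G"   # hard memory cap per user session
    c.SystemdSpawner.cpu_limit = 2.0    # at most two CPU cores per user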

It would have been nice to be able to choose between different R versions, another feature that is only available in the paid version of RStudio. Because jupyter-server-proxy relies on iterating through entry points in each proxy package, the only way to enable this right now is to modify jupyter-rsession-proxy itself.
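To see why, this is roughly how jupyter-rsession-proxy registers itself with jupyter-server-proxy; the excerpt is paraphrased from its packaging and may differ between versions:

    # setup.py (excerpt, paraphrased): jupyter-server-proxy discovers
    # proxied applications by scanning the jupyter_serverproxy_servers
    # entry point group of every installed package.
    from setuptools import setup

    setup(
        name="jupyter-rsession-proxy",
        entry_points={
            "jupyter_serverproxy_servers": [
                "rstudio = jupyter_rsession_proxy:setup_rserver",
            ]
        },
    )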

This is where our #PR133 comes in. By allowing setup_rserver to receive a custom name and a configuration file as arguments, all it takes to add additional R versions on Jupyter is to create a new skeleton package that imports jupyter-rsession-proxy’s setup_rserver and adds extra entry points under jupyter_serverproxy_servers, as the sketch below illustrates.
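A minimal sketch of such a skeleton package follows. The package name, the R version, the configuration file path, and the exact keyword names accepted by setup_rserver are all illustrative assumptions; see the PR for the actual signature:

    # rstudio_r42/__init__.py (hypothetical package)
    # Wraps setup_rserver with a custom launcher name and an rserver
    # config file pointing at a specific R build. Keyword names assumed.
    from jupyter_rsession_proxy import setup_rserver

    def setup_rserver_r42():
        return setup_rserver(
            name="RStudio (R 4.2)",
            rserver_conf="/etc/rstudio/rserver-r42.conf",
        )

    # setup.py: one extra entry point per R version you want to expose.
    from setuptools import setup

    setup(
        name="rstudio-r42",
        packages=["rstudio_r42"],
        entry_points={
            "jupyter_serverproxy_servers": [
                "rstudio-r42 = rstudio_r42:setup_rserver_r42",
            ]
        },
    )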

Here is a working example we currently use on our HPC cluster.

GPU Server for HPC Cluster

How hard is it to build a server with four top-of-the-line GPUs for a high-performance computing cluster? Harder than you might think.

When I started building the SCRP cluster back in the summer of 2020, the GPU servers were provided by Asrock Rack. Everything except the GPUs was preassembled. That is the sensible thing to do in normal times.

Fast forward to the summer of 2021, and times were not normal. Supply chain disruption and the semiconductor shortage were in full swing. Pretty much every name-brand server manufacturer quoted us months-long lead times, if they were willing to deal with us at all. To get everything in place for the new academic year, I built a series of servers with components sourced from around the world. It is actually not that hard to build servers, which are basically heavy-duty PCs with all sorts of specialized parts. That is, unless you want a GPU server suitable for an HPC cluster.

So what is so special about GPU servers for an HPC cluster?

  • Most server cases have seven to eight PCIe slots, but I needed at least nine (four dual-slot GPUs plus a single-slot InfiniBand network card). There are maybe two manufacturers of such cases that you can find through retail channels.
  • High-end GPUs use a lot of power. A single RTX 3090 draws 350W, so four of them draw 1400W. Add in the CPU and everything else and you are looking at 1800W minimum. A beefy power supply is definitely needed.
  • 1800W ATX power supplies do exist, you say. The problem is, almost no servers use ATX power supplies; they pretty much all use specialized CRPS power supplies, which give you two power supplies in one small package. There are a lot of benefits to this, including redundancy and a lower load per supply. Guess how many 2000W CRPS power supplies you can find through retail channels? ZERO. There is simply too much demand for them from server manufacturers and too little from retail buyers. I was fortunate enough to have one specially ordered on my behalf by a retail supplier, but it took a while to arrive.
  • Once you have sorted out the parts, assembly comes next. Unless you have one of those highly specialized Supermicro 11-slot motherboards (I am not sure they even sell them in retail), your motherboard will be the width of seven PCIe slots. But you need nine! What do you do? Simple, you might think: all you need is a PCIe extension cable. Except one end of the cable has to fit under a GPU, and 99% of the cables you can buy cannot do that. I ended up having one custom-made. Yes, custom-made. It is the silver strip in the photo. Did I mention it was so fragile out of the factory that I ended up reinforcing it with hot glue myself?

To conclude, if you think building your own PC is challenging, building a GPU server for an HPC cluster is probably three times the challenge. Another reason why you should not maintain your own infrastructure.

PCIe Gen 4 GPUs do not play nice with Gen 3 extender boards

Spent over an hour trying to figure out why some new GPUs were not working. The server in question is an Asrock Rack 2U4G-EPYC-2T, a specialized server that allows four GPUs to be installed in a relatively small case. Google was not helpful because, understandably, this is a niche product produced only in small quantities.

What did not work:

  • Attaching four Ampere GPUs (i.e. RTX 3000 series) in their intended positions in the case.

What worked:

  • Attaching four Pascal GPUs (i.e. GTX 1000 series) in the intended positions.
  • Attaching only one Ampere GPU at the rear of the case.
  • Attaching four Ampere GPUs directly to the mainboard.

Took me a good hour to figure out that the issue was caused by the PCIe extender board. The three GPU positions at the front require the extender board, but the board only supports PCIe Gen 3. Normally, Gen 4 GPUs can negotiate with Gen 3 mainboards to communicate at PCIe Gen 3 speed, but apparently they cannot do that through the extender board. Once the issue had been identified, the solution was actually very straightforward: manually setting the PCIe link speed to Gen 3 solves everything.

Yet another reason why maintaining your own computing infrastructure is not for the faint hearted.