In an effort to reduce electrical resistance, we replaced the standard 6ft (182cm) 18AWG cable and 1ft (30cm) C14 adapter with a straight-through 2ft (60cm) 14AWG C13-to-C14 cable. This was possible because the PSUs sit very close to the PDUs.
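As a rough sanity check of the cable swap (estimated, not measured), the resistive loss can be worked out from standard copper-wire resistance figures; the 8A load per cable below is an assumed number for illustration:

```python
# Standard copper resistance per foot (18 AWG ~6.385 ohm/kft, 14 AWG ~2.525 ohm/kft).
OHMS_PER_FT = {18: 0.006385, 14: 0.002525}

def drop_watts(awg: int, feet: float, amps: float) -> float:
    """Power dissipated in the cable (both conductors) at the given load."""
    resistance = OHMS_PER_FT[awg] * feet * 2  # out and back
    return amps ** 2 * resistance

# Old: 6 ft 18 AWG cable plus 1 ft adapter; new: single 2 ft 14 AWG cable.
old = drop_watts(18, 6, 8) + drop_watts(18, 1, 8)
new = drop_watts(14, 2, 8)
print(f"old loss: {old:.2f} W, new loss: {new:.2f} W")
```

At an assumed 8A, the old cable-plus-adapter chain dissipates several watts per rig, while the short 14AWG run loses well under a watt.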
How cold does an R9 290 run with three 15W Delta fans? About 15°C cooler than the farm average.
ethos1 was running hot, and one of our breakers tripped. We are supposed to run at 24A, but our PDUs were pushing 27A. So we undervolted the GPUs from 1175mV to 1125mV, which reduced the load by about 25W per GPU. The PDUs now run at 23-24A, and things are a lot cooler. This also freed up enough power and headroom for six more rigs.
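A back-of-the-envelope check on why the undervolt moved the needle; the ~25W per GPU is the observed saving, while six rigs per PDU is an assumed layout for illustration:

```python
# Aggregate effect of the ~25 W per-GPU undervolt on one 240 V PDU.
GPUS_PER_RIG = 4
WATTS_SAVED_PER_GPU = 25  # observed saving from the 1175 mV -> 1125 mV drop
VOLTS = 240

def amps_saved(rigs_on_pdu: int) -> float:
    """Current reduction on one 240 V PDU after the undervolt."""
    watts = rigs_on_pdu * GPUS_PER_RIG * WATTS_SAVED_PER_GPU
    return watts / VOLTS

print(f"{amps_saved(6):.1f} A shed per six-rig PDU")  # 600 W -> 2.5 A
```

A PDU feeding six rigs sheds 600W, or 2.5A at 240V, which is in the same ballpark as the observed drop from ~27A to 23-24A (the underclocking helps too).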
The only issue is that some GPUs need to be underclocked after the undervolt. We are monitoring which ones go down and underclocking them as we go. We are also going to investigate the possibility of installing a passive intake vent on the other side of the warehouse.
Finally, all GPUs are overclocked to 1050/1250/50 (core/mem/powertune). Some could not handle that clock; that will be solved with per-GPU tuning. To reach 6,000MH/s farm-wide, we will run a script that finds the optimal overclock on a per-GPU basis.
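A minimal sketch of what such a per-GPU tuning script might do. The `is_stable(gpu, mhz)` callback is a hypothetical placeholder; on the farm it would wrap hashrate checks and crash detection:

```python
# Sketch of the planned per-GPU tuning pass: walk the core clock down
# from the 1050 MHz target until the GPU holds it, with 800 MHz as the floor.
def find_max_stable_clock(gpu_id: int, is_stable, start: int = 1050,
                          floor: int = 800, step: int = 25) -> int:
    """Return the highest core clock at which is_stable() passes."""
    clock = start
    while clock > floor and not is_stable(gpu_id, clock):
        clock -= step
    return clock

# Example with a fake stability test: this GPU only holds 1000 MHz or less.
print(find_max_stable_clock(0, lambda gpu, mhz: mhz <= 1000))  # -> 1000
```

A real version would need a long soak per candidate clock, since mining instability can take hours to show up.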
We found a GPU with no fan control (its fan always ran at 1000RPM). We attached a 15W 80mm Delta fan to it, and now it runs quite cool.
We also had a low-power AM3 Athlon 170u CPU and wanted to find the lowest CPU temperature we could achieve, so we mounted a copper heatsink with Prolimatech PK-2 thermal compound. It is now the coldest CPU in the farm (rig f2).
We are using Stilt's 1.175 BIOS mod for better mining performance at lower temperature and voltage. For this procedure, we use a bootable DOS image that automatically flashes all GPUs. Before flashing, we make sure we have a video signal from each rig so we can watch the progress.
Flashing to Stilt's BIOS resulted in a dramatic drop in temperature and power usage. We are still running at 800MHz core for a final stability check.
We have set all GPUs to 800MHz core until full launch. After the farm is fully up, we will apply the overclocks and BIOS flashes.
For each rig, plug in one GPU and check for a video signal. Then go into the BIOS and set BOOT ON POWER (or AC BACK). This way we can boot the rigs by toggling power on at the PSU, which lets us use the switched rack PDUs to reboot rigs remotely.
Each rig runs ethOS on a 30GB Kingston drive.
We have a dedicated shelf for PSUs, PDUs, SSD, and spare cables. This allows us to tuck away anything that might impede airflow.
The GPUs are connected to PCI-E power. For risers, we used our extra stock of 16x-to-16x powered ribbon risers.
The tedious process of opening the GPU and PSU packaging is complete. All modular cables have been attached, and all GPUs are now staged close to their final provisioning locations.
The first sections of GPUs have been placed. The L and I beams are secured with zip ties.
The dedicated monitor showcases the first live rig and is attached to the side of the rack with zip ties. We are using 3m USB and DVI extension cables from the keyboard and monitor, so that any troubled rig can be investigated quickly.
The AP7941 APC PDUs have been placed in the frames, and we have started the process of unpacking and assembling the modular PSUs.
The total cost of the farm skeleton is $2,212.30 ($44.25 per rig):
Item                Qty    Total Cost
Motherboard Tray    50     $600.00
Support Beam        36     $512.30
Shelving            2      $1,100.00
Total (50 rigs)            $2,212.30 ($44.25 per rig)
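A trivial check that the totals line up, hard-coding the line items from the table above:

```python
# Skeleton cost line items from the table.
items = {
    "Motherboard Tray": 600.00,
    "Support Beam": 512.30,
    "Shelving": 1100.00,
}
total = sum(items.values())
per_rig = total / 50
print(f"total: ${total:,.2f}, per rig: ${per_rig:.2f}")
# -> total: $2,212.30, per rig: $44.25
```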
The Gigabyte and MSI 990FXA motherboards (with AM3 Athlon CPUs) allow for provisioning six GPUs per rig. Even though we are using four GPUs per rig, this will let us upgrade to the next generation of (more energy-efficient) GPUs without much hassle.
Aluminum sheeting with standoffs isolates the motherboard from the wireframe racks.
The GPUs are mounted between L- and I-shaped steel beams and suspended from another L-shaped steel beam. This allows for easy swapping of GPUs without cutting any zip ties.
The first of two frames has been assembled. It has five shelves for motherboards and GPUs (for six rigs each), and five shelves for the PSUs.
Insulation has been added to the inside of the wall, along with foam-in-place and duct tape, to create an airtight seal so that hot air cannot permeate back through the wall.
The fan has been installed. All that is left is to put insulation inside the wall around the fan, and duct tape the inside wall gaps. This will prevent the heat from flowing through the inside of the wall.
The electrical work is complete, and the PDUs work.
Electrical work is almost done.
Huge racks: 2.4m by 1.8m, with 12 shelves per rack.
The drum fan uses around 890 Watts.
The breaker panel provides 150A 240V service. We are using 240V because it is slightly more efficient than 120V, but mainly because it helps to create a denser deployment.
To get the maximum number of rigs we can run:
Amps * Volts * 0.8 (150 * 240 * 0.8 = 28,800 continuous watts). The 0.8 factor is the standard continuous-load derating for breakers.
Since we are provisioning 50 rigs at 1000W each, we need the help of the adjacent unit. Therefore, our electrician is running six 30A L6-30R circuits from our panel and four 30A L6-30R circuits from our neighboring tenant's panel. We have an agreement to pay for the power we use from the neighbor's panel.
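A quick sketch of that capacity math, using the 1000W-per-rig planning figure and applying the same 0.8 derating to the branch circuits:

```python
# Capacity math for the build-out. The 0.8 factor is the continuous-load
# derating from the formula above; 1000 W per rig is the planning figure.
def continuous_watts(amps: float, volts: float, derate: float = 0.8) -> float:
    """Usable continuous power on a circuit or panel."""
    return amps * volts * derate

panel = continuous_watts(150, 240)           # our panel alone
circuits = 10 * continuous_watts(30, 240)    # six + four 30 A L6-30R circuits
rigs_on_panel = int(panel // 1000)
print(f"panel alone: {panel:.0f} W -> {rigs_on_panel} rigs at 1000 W each")
print(f"ten circuits: {circuits:.0f} W -> enough headroom for all 50 rigs")
```

Our panel alone tops out at 28 rigs, while the ten combined circuits give 57,600 continuous watts, comfortably above the 50,000W the full farm needs.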
Our plan is to mine Ethereum and auto-trade the proceeds to BTC using 50 rigs. Each rig will have four R9 290s.