May 07 2020
 
Picture of a Raspberry Pi 4 UART connected to a console port on a Synology DiskStation DS1813+

As a result of my Synology DS1813+ crashing yet again due to the memory and crashing issues I've been regularly experiencing, I finally decided to try hacking the Synology NAS to run another operating system. Let it also be noted that many of my readers are experiencing these issues as well; I receive chats and e-mails about this almost daily.

Under the hood, the DS1813+ is just another x86 computer system. There’s no reason why we shouldn’t be able to hack this to run another Linux distribution or possibly even a BSD variant like FreeNAS.

Ultimately, all I want from this is a reliable NAS to perform software RAID and provide an iSCSI target. It would also be kinda cool to see what we can install on it!

I’ve already started preliminary work on this, so keep visiting back as the blog post gets updated with more and more information on a regular basis. If you feel you can contribute, please don’t hesitate to leave a comment or reach out.

Current Status

I’ll update this section regularly with the current status of my efforts.

Completed:

  • Serial console access
  • UEFI Shell Access
  • GRUB Bootloader Access

See the below sections for information.

Accessing the DS1813+ system

There are numerous approaches we can take to gain access, repurpose the Synology DiskStation, and install another operating system.

These include:

  • Accessing the serial console
  • Accessing the BIOS/UEFI and/or bootloader
  • Booting from a USB stick or modified HD
  • Modifying the USB DOM

The ultimate result we are looking for is to boot our own Linux kernel, kick off a Linux or BSD OS installer, or boot from a modified drive that already has Linux installed on it.

Accessing the serial console

Serial console access to the Synology DiskStation is easily achieved.

I originally found this post which provided me information on the pinouts and the voltage: http://www.netbsd.org/ports/sandpoint/instSynology.html

While the above post is for older units utilizing architectures other than x86, the pinout information along with the voltage is still relevant.

With the Synology unit using 3.3V logic, you cannot connect a normal computer RS-232 interface directly, as RS-232 uses much higher signaling voltages (typically ±5 to ±12V). You’ll need to step down the voltage using a converter, or use a serial interface that runs at 3.3V.

In my case, I used a Raspberry Pi 4 and one of the UART ports along with Minicom to access it. The Pi 4 uses 3.3V for UART, so it works perfectly. You’ll need Rx, Tx, and GND for the connection to work.

Picture of a Raspberry Pi 4 with UART connection to ttyS0
Raspberry Pi 4 UART Connection ttyS0

In my case, I used the ttyS0 UART interface to avoid the clock frequency and timing issues experienced when using ttyAMA0. To use ttyS0, you’ll need to enable the UART in your Pi’s boot configuration, as well as disable the Raspberry Pi serial console.
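On a stock Raspberry Pi OS install, the changes look roughly like this (a sketch; the file locations and the getty service name assume Raspberry Pi OS and may differ on other distributions):

```
# /boot/config.txt — enable the UART (exposed as ttyS0 on the Pi 4)
enable_uart=1

# /boot/cmdline.txt — remove the kernel's serial console so the Pi
# doesn't attach its own login console to the UART; delete this token:
#   console=serial0,115200

# Then stop the serial login service and reboot:
#   sudo systemctl disable --now serial-getty@ttyS0.service
#   sudo reboot
```

Alternatively, `raspi-config` (Interface Options, Serial Port) can make the same changes: serial console off, serial hardware on.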

Picture of a Raspberry Pi 4 UART connected to a Synology DS1813+ serial console connection
Raspberry Pi 4 UART connected to Synology Diskstation DS1813+ console port

I used the following command to initialize minicom:

minicom -b 115200 -o -D /dev/ttyS0

After connecting, I was able to view and interact with the serial console.

Accessing the BIOS/UEFI and/or bootloader

After gaining serial console access, powering on the Synology DS1813+ results in the following:

Intel (R) Granite Well Platform
Copyright (C) 1999-2011 Intel Corporation. All rights reserved.
Product Name : GRANITE WELL
Processor : Intel(R) Atom(TM) CPU D2701 @ 2.13GHz
Current Speed : 2.12 GHz
Total Memory : 4096 MB
Intel BLDK Version : Tiano-GraniteWell (Allegro 0.3.7)

Miscellaneous Info

Memory Ref Code Version :
CDV Ref Code Version : 0.9.0-1
P-Unit Firmware Version :
P-Unit Location in Flash : 0xFFFB0000
P-Unit Location in RAM : 0xDF6F0000
No of SATA ports available : 6
No of SATA ports enabled : 6

Press F10 in 3 seconds to list all boot options
Any other key to active boot…

Unfortunately, I’m unable to press F10 due to terminal emulation issues (it’s also possible they’ve removed this feature to stop someone from doing what I’m doing).

After the timeout, the Synology will UEFI boot the GRUB bootloader.

You can browse through the list, edit the entries, as well as run the GRUB command line.
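From the GRUB command line you can poke around with the standard GRUB 2 commands (a sketch of typical usage; which of these actually work here depends on the modules Synology compiled into their GRUB build, and the device/file names are assumptions):

```
grub> ls                                 # list the devices/partitions GRUB can see
grub> ls (hd0,msdos1)/                   # list files on a partition
grub> cat (hd0,msdos1)/grub/grub.cfg     # inspect the boot configuration
grub> linux /zImage console=ttyS0,115200 # load a kernel manually
grub> initrd /rd.gz                      # load its initial ramdisk
grub> boot                               # boot what was loaded
```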

Booting from a USB stick or modified HD

I attempted to boot numerous different USB sticks containing OS installers (Linux variants and FreeNAS) with no success. I also tried to boot off an HD connected to one of the SATA ports in the NAS, but this was also unsuccessful.

I noticed that out of the 8 SATA connections, ports 1-6 are treated differently (possibly being on a SATA expander) and 7-8 may be accessed by the UEFI, BIOS, or bootloader.

I attempted to chainload a CD image written to a USB stick, however GRUB is not able to see any USB or HDs other than the SATA DOM it’s residing on.

Removing the SATA DOM presents you with a UEFI shell, however you are unable to see, view, or execute any efi files as the shell is unable to read any USB or HD devices other than the SATA DOM.

It appears both the UEFI/BIOS and GRUB have been modified to either not allow access to other bootable devices, or drivers are required which haven’t been incorporated.

In order to execute our own kernel or OS, we may need to modify the SATA DOM.

Modifying the USB DOM

The onboard USB DOM appears to be the only bootable device that is presented to the UEFI/BIOS.

On a booted system, the DOM appears as the device “/dev/synoboot”.

While logged in to the Synology via SSH, you are unable to mount this device to a mount point. You can however image the device, copy it, and write it to another device on another system.

To image the USB DOM, I ran the following command:

dd if=/dev/synoboot of=/volume1/ShareName/synoboot-image

I then downloaded the “synoboot-image” file to another Linux system, wrote it to a USB stick, and was able to mount the partitions.

There are two vfat partitions containing some Linux kernels, ramdisks, and the UEFI version of GRUB.
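To inspect the image without writing it to a stick, it can be loop-mounted on another Linux box (a sketch; the partition numbering is an assumption, so check the `fdisk -l` output first):

```
# List the partitions inside the dd image
fdisk -l synoboot-image

# Attach the image to a free loop device, mapping its partitions (-P)
sudo losetup -fP --show synoboot-image    # prints e.g. /dev/loop0

# Mount the first vfat partition and look around
sudo mount /dev/loop0p1 /mnt
ls /mnt

# Clean up when done
sudo umount /mnt
sudo losetup -d /dev/loop0
```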

I believe that to move forward, we will need to either modify and incorporate a version of GRUB with extra drivers, or use the existing version to boot our own kernel and initial ramdisk.

At this point, we’ll need to evaluate how to write to the SATA DOM. There are three options:

  • Modify the image we created, and write it back after copying it back to the Synology NAS.
  • Find a way to mount and access the partitions directly on the Synology NAS; at the moment this fails with “access is denied”, although reading the device with dd works.
  • Connect the SATA DOM to the USB headers on another system.

Once we can access this SATA DOM, it may be possible to copy in kernels and ramdisks to kick off an OS installer, or better yet, install a more feature- and driver-filled version of GRUB.
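If writing to the DOM works out, the change could be as small as an extra entry appended to the existing grub.cfg on the vfat partition (a hypothetical sketch; the file names, paths, and kernel arguments are all assumptions for illustration):

```
# Hypothetical addition to the GRUB config on the DOM
menuentry "Custom Linux installer" {
    # Assumes our own kernel and initrd were copied onto the DOM
    linux  /custom/vmlinuz console=ttyS0,115200
    initrd /custom/initrd.img
}
```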

Aug 12 2019
 
DS1813+

Around a month ago, I decided to turn on and start utilizing NFS v4.1 in DSM on my Synology DS1813+ NAS. As most of you know, I have a vSphere cluster with 3 ESXi hosts, which are backed by an HPE MSA 2040 SAN and my Synology DS1813+ NAS.

The reason why I did this was to test the new version out, and attempt to increase both throughput and redundancy in my environment.

If you’re a regular reader, you know from my original plans (post here), and then from my later issues with iSCSI (post here), that I ultimately set up my Synology NAS to act as an NFS datastore. At the moment, I use my HPE MSA 2040 SAN for hot storage and the Synology DS1813+ for cold storage. I’ve been running this way for a few years now.

Why NFS?

Some of you may ask why I chose NFS. Well, I’m an iSCSI kind of guy, but I’ve had tons of issues with iSCSI on DSM, especially MPIO on the Synology NAS. The overhead on the unit was horrible (a result of the NAS’s limited hardware) for both block-level and file-level (virtualized fileio) iSCSI targets.

I also found a major issue: if one of the drives was dying or dead, the NAS wouldn’t report it as dead, and the iSCSI target would grind to a complete halt. I’d then spend days figuring out what was going on before finally replacing the drive once I found it was the issue.

After spending forever trying to tweak and optimize, I found that NFS worked best for my Synology NAS unit.

What’s this new NFS v4.1 thing?

Well, it’s not actually that new! NFS v4.1 was released in January 2010 and aims to support clustered environments (such as virtualized environments like vSphere/ESXi). It includes a session trunking mechanism, also known as NFS multipathing.

We all love the word multipathing, don’t we? As most of you iSCSI and virtualization people know, we want multipathing on everything. It provides redundancy as well as increased throughput.

How do we turn on NFS Multipathing?

According to the VMware vSphere product documentation (here):

While NFS 3 with ESXi does not provide multipathing support, NFS 4.1 supports multiple paths.


NFS 3 uses one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or hostname for the NFS server, and does not support multiple paths. Depending on your network infrastructure and configuration, you can use the network stack to configure multiple connections to the storage targets. In this case, you must have multiple datastores, each datastore using separate network connections between the host and the storage.


NFS 4.1 provides multipathing for servers that support the session trunking. When the trunking is available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not supported.

So it is supported! Now what?

In order to use NFS multipathing, the following must be present:

  • Multiple NICs configured on your NAS with functioning IP addresses
  • A gateway is only configured on ONE of those NICs
  • NFS v4.1 is turned on inside of the DSM web interface
  • A NFS export exists on your DSM
  • You have a version of ESXi that supports NFS v4.1

So let’s get to it! Enabling NFS v4.1 Multipathing

  1. First, log in to the DSM web interface and configure your NIC adapters in the Control Panel. As mentioned above, only configure the default gateway on one of your adapters.
     Synology Multiple NICs Configured Screenshot
  2. While still in the Control Panel, navigate to “File Services” on the left, expand NFS, and check both “Enable NFS” and “Enable NFSv4.1 support”. You can leave the NFSv4 domain blank.
     Enabling NFSv4.1 on Synology DSM
  3. If you haven’t already configured an NFS export on the NAS, do so now. No further special configuration for v4.1 is required other than the norm.
  4. Log on to your ESXi host, go to storage, and add a new datastore. Choose to add an NFS datastore.
  5. On “Select NFS version”, select “NFS 4.1”, and click next.
     Selecting the NFS version on the Add Datastore dialog box on ESXi
  6. Enter the datastore name, the folder on the NAS, and the Synology NAS IP addresses, separated by commas. Example below:
     New NFS Datastore details and configuration on ESXi dialog box
  7. Press the green “+” and you’ll see it spreads them to the “Servers to be added” list, each server entry reflecting an IP on the NAS (please note I made a typo on one of the IPs).
     List of Servers/IPs for NFS Multipathing on ESXi Add Datastore dialog box
  8. Follow through with the wizard, and it will be added as a datastore.

That’s it! You’re done and are now using NFS Multipathing on your ESXi host!
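If you prefer the CLI, the same datastore can be added from an SSH session on the ESXi host (a sketch; the IP addresses, export path, and datastore name are placeholders, and this assumes an ESXi version recent enough to have the `esxcli storage nfs41` namespace):

```
# Mount an NFS 4.1 datastore with two paths to the NAS
esxcli storage nfs41 add -H 10.0.0.21,10.0.0.22 -s /volume1/vmware -v Synology-NFS

# Verify the mount and the hosts attached to it
esxcli storage nfs41 list
```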

In my case, I have all 4 NICs in my DS1813+ configured and connected to a switch. My ESXi hosts have 10Gb DAC connections to that switch, and can now utilize it at faster speeds. During intensive I/O loads, I’ve seen the full aggregated network throughput hit and sustain around 370MB/s.

After resolving the issues mentioned below, I’ve been running for weeks with absolutely no problems, and I’m enjoying the increased speed to the NAS.

Additional Important Information

After enabling this, I noticed that memory usage had drastically increased on the Synology NAS, peaking whenever my ESXi hosts restarted. This escalated to the NAS running out of memory (both physical and swap) and ultimately crashing.

After weeks of troubleshooting, I found the processes that were causing this. While the processes were unrelated, the issue only occurred when using NFS v4.1 with multipathing. To resolve it, I had to remove the “pkgctl-SynoFinder” package and disable its services; I could do this in my environment because I only use the NAS for NFS and iSCSI. This resolved the issue. I created a blog post here outlining how to resolve it. I also further optimized the NAS and its memory usage by disabling other unneeded services in a post here, targeted at other users like myself who only use the unit for NFS/iSCSI.

Leave a comment and let me know if this post helped!

Jul 31 2019
 

If you’re like me and use a Synology NAS as an NFS or iSCSI datastore for your VMware environment, you want to optimize it as much as possible to reduce any hardware resource utilization.

Specifically we want to disable any services that we aren’t using which may use CPU or memory resources. On my DS1813+ I was having issues with a bug that was causing memory overflows (the post is here), and while dealing with that, I decided to take it a step further and optimize my unit.

Optimize the NAS

In my case, I don’t use any file services, and only use my Synology NAS (Synology DS1813+) as an NFS and iSCSI datastore. Specifically I use multipath for NFSv4.1 and iSCSI.

If you don’t use SMB (Samba / Windows File Shares), you can make some optimizations which will free up substantial system resources.

Disable and/or uninstall unneeded packages

First step, open up the “Package Center” in the web GUI and either disable, or uninstall all the packages that you don’t need, require, or use.

To disable a package, select the package in Package Center, then click on the arrow beside “Open”. A drop down will open up, and “Disable” or “Stop” will appear if you can turn off the service. This may or may not be persistent on a fresh boot.

To uninstall a package, select the package in Package Center, then click on the arrow beside “Open”. A drop down will open up, and “Uninstall” will appear. Selecting this will uninstall the package.

Disable the indexing service

As mentioned here, the indexing service can consume quite a bit of RAM/memory and CPU on your Synology unit.

To stop this service, SSH in to the unit as admin, use the command “sudo su” to get a root shell, and finally run this command:

synoservice --disable pkgctl-SynoFinder

The above command will probably not persist across reboots and needs to be run on each fresh boot. You can, however, uninstall the package with the command below to completely remove it.

synopkg uninstall SynoFinder

Doing this will free up substantial resources.

Disable SMB (Samba), and NMBD

I noticed that both smbd and nmbd (Samba/Windows File Share Services) were consuming quite a bit of CPU and memory as well. I don’t use these, so I can disable them.

To disable them, I ran the following command in an SSH session (remember to “sudo su” from admin to root).

synoservice --disable nmbd
synoservice --disable samba

Keep in mind that while this should be persistent on boot, it wasn’t on my system. Please see the section below on how to make it persistent on boot.

Disable thumbnail generation (thumbd)

When viewing processes on the Synology NAS sorted by memory, there are numerous “thumbd” processes (sometimes over 10). These processes handle thumbnail generation for the File Station viewer.

Since I’m not using this, I can disable it. To do this, we either have to rename or delete the following file. I do recommend making a backup of the file.

/var/packages/FileStation/target/etc/conf/thumbd.conf

I’m going to rename it so that the service daemon can’t find it when it initializes, which causes the process not to start on boot.

cd /var/packages/FileStation/target/etc/conf/
mv thumbd.conf thumbd.conf.bak

Doing the above will stop it from running on boot.

Make the optimizations persistent on boot

In this section, I will show you how to make all the settings above persistent on boot. Even though I have removed the SynoFinder package, I still will create a startup script on the Synology NAS to “disable” it just to be safe.

First, SSH in to the unit, and run “sudo su” to get a root shell.

Run the following commands to change directory to the startup script, and open a text editor to create a startup script.

cd /usr/local/etc/rc.d/
vi speedup.sh

While in the vi file editor, press “i” to enter insert mode. Copy and paste the code below:

#!/bin/sh
case "$1" in
    start)
        echo "Turning off memory garbage"
        synoservice --disable nmbd
        synoservice --disable samba
        synoservice --disable pkgctl-SynoFinder
        ;;
    stop)
        echo "Pretend we care and are turning something on"
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
exit 0

Now press escape, then type “:wq” and hit enter to save and close the vi text editor. Run the following command to make the script executable.

chmod 755 speedup.sh

That’s it!

Conclusion

After making the above changes, you should see a substantial performance increase and reduction in system resources!

In the future I plan on digging deeper in to optimization as I still see other services I may be able to trim down, after confirming they aren’t essential to the function of the NAS.

Feel like you can add anything? Leave a comment!

Jul 31 2019
 

Once I upgraded my Synology NAS to DSM 6.2 I started to experience frequent lockups and freezing on my DS1813+. The Synology DS1813+ would become unresponsive and I wouldn’t be able to SSH or use the web GUI to access it. In this state, NFS sometimes would become unresponsive.

When this occurred, I would need to press and hold the power button to force a shutdown, or pull the power. This is extremely risky, as it can cause data corruption.

I’m currently running DSM 6.2.2-24922 Update 2.

The cause

This occurred for over a month until it started to interfere with ESXi hosts. I also noticed that the issue would occur when restarting any of my 3 ESXi hosts, and would definitely occur if I restarted more than one.

During the restarting, while logged in to the web GUI and SSH, I was able to see that the memory (RAM) usage would skyrocket. Finally the kernel would panic and attempt to reduce memory usage once the swap file had filled up (keep in mind my DS1813+ has 4GB of memory).

Analyzing “top” as well as looking at processes, I noticed the Synology indexing service was causing excessive memory and CPU usage. On a fresh boot of the NAS, it would consume over 500MB of memory.
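For anyone trying to reproduce this, the memory hogs can be spotted from an SSH session with standard tools along these lines (a sketch; exact flag support can vary between DSM’s BusyBox applets and the full GNU versions):

```
# Show overall memory and swap usage
free -m

# List the processes using the most resident memory (%MEM is column 4)
ps aux | sort -rn -k4 | head -n 10

# Or watch it live; inside top, sort by memory
top
```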

The fix (Please scroll down and see updates)

In my case, I only use my Synology NAS for an NFS/iSCSI datastore for my ESXi environment, and do not use it for SMB (Samba/File Shares), so I don’t need the indexing service.

I went ahead and SSH’ed in to the unit, and ran the following commands to turn off the service. Please note, this needs to be run as root (use “sudo su” to elevate from admin to root).

synoservice --disable pkgctl-SynoFinder

While it did work and the memory was instantly freed, the setting did not stay persistent on boot. To uninstall the indexing service, run the following command:

synopkg uninstall SynoFinder

Doing this resolved the issue and freed up tons of memory. The unit is now stable.

Update May 31st, 2020 – Increased Stability

After troubleshooting I noticed that the majority of stability issues would start occurring when ESXi hosts accessing NFS exports on the Synology diskstation are restarted.

I went ahead and stopped using NFS, started using iSCSI with MPIO, and the stability of the Synology NAS has greatly improved. I will continue to monitor this.

I still have plans to hack the Synology NAS and put my own OS on it.

Update May 2nd, 2020 – It’s still crashing, and really frustrating me

Today I had to restart my 3 ESXi hosts that are connected to the NFS export on the Synology Disk Station. After restarting the hosts, the Synology device has gone in to a lock-up state once again. It appears the issue is still present.

The device is responding to pings, and still provides access to SMB and NFS, but the web GUI, SSH, and console access is unresponsive.

I’m officially going to start planning on either retiring this device as this is unacceptable, especially in addition to all the issues over the years, or I may try an attempt at hacking the Synology Diskstation to run my own OS.

Update April 21st, 2020 – What I thought was the fix

After a few more serious crashes and lockups, I finally decided to do something about this. I backed up my data, deleted the arrays, and performed a factory reset on the Synology DiskStation. I also zeroed the metadata and MBR off all the drives.

I then configured the Synology NAS from scratch, used Btrfs (instead of ext4), and restored the backups.

The NAS now appears to be running great and has not suffered any lockups or crashes since. I’ve also noticed that memory management is working a lot better.

I have a feeling that this issue was caused due to the long term chaining of updates (numerous updates installed over time), or the use of the ext4 filesystem.

Update March 20th, 2020

As of March 2020, this issue is still occurring on numerous new firmware updates and versions. I’ve tried reaching out to Synology directly on Twitter a few times about this issue, as well as via e-mail (indirectly, regarding something else), and have still not heard back. As of this time, the issue still occurs on a regular basis on DSM 6.2.2-24922 Update 4. I’ve taken production and important workloads off the device, since I can’t have it continuously crashing or freezing overnight.

Update – August 16th, 2019

My Synology NAS has been stable since I applied the fix; however, after an uptime of a few weeks, I noticed that when restarting servers, the memory usage does spike (for example, from 6% to 46%). With the fixes applied above, though, the unit is stable and no longer crashes.