At wit’s end - new QNAP NAS won’t stop HDD churning

I’d be happy to change the RAID setup but I don’t know how. The way I had it set up at first there was a “thick volume” that quickly filled; changing it to “thin” let me have one volume that spans essentially the entire 4TB.

I’ve unplugged it from the network but it’s still goin’ and goin’ and goin’ . . . :frowning:

I don’t know much about how the thick/thin setup works, but on normal RAID drives I think I would be tempted to pull one of the disks and see what happens (assuming you have a good backup, in case things get messy).

You could always start again (if a backup exists).

Also, it’s worth contacting QNAP support; they can be helpful and should at least give you a more informed action plan.

Hi, this won’t be anything to do with how you have the RAID set up, or with thick/thin volumes. RAID levels are used to boost performance and/or reliability; thick/thin is about how space is allocated to individual LUNs on a volume.

The latter is a tricky concept, but in essence: with thick provisioning you allocate a specific amount of storage to a LUN or target, and once it’s allocated it is effectively reserved solely for that resource. With thin provisioning you can over-allocate and, in a sense, use the space twice. You don’t really get double use; what it means is that the spare space is effectively shared by all resources, so you get better disk usage overall. Thin provisioning can also trip you up, though: it’s easy to run out of space even when the NAS says there is some left, so it needs managing, and some alerts setting up.
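
For anyone curious what that looks like underneath: QTS builds its volumes on Linux LVM (the dm-0 device in the log further down is one of these), so a rough sketch of thick vs thin in plain LVM terms, with purely illustrative names and sizes, would be:

# Thick: a fixed-size logical volume; the full 2TB is committed to
# this volume the moment it is created.
lvcreate -L 2T -n thickvol vg1

# Thin: create a pool first, then volumes whose *virtual* size can
# exceed what the pool physically holds (over-allocation).
lvcreate -L 3.5T --thinpool pool0 vg1
lvcreate -V 4T --thin -n thinvol vg1/pool0

# Thin pools need watching: 'lvs' shows Data%, so you can alert
# before the pool itself runs out of real space.
lvs vg1

QTS drives all of this through its own tooling, of course; the point is just that a thin volume’s advertised size and the pool’s real capacity are two different numbers.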

Either the QNAP is not going into low power mode, or the disk firmware is set for constant 7200rpm rotation. Disks designed for home users and small business NAS are both RAID aware and designed with a variable rotation speed, to keep noise down and power usage low. Disks designed for Enterprise NAS use never slow down; they run all the time, as the performance demands are higher and the devices usually sit in server rooms. It’s to do with initial seek times when requesting a file: if the disks have to spin up first, the seek time is high, so Enterprise disks don’t slow down.
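
If you can SSH into the NAS, the drives will report some of this themselves. This assumes smartmontools and hdparm are present, which is not guaranteed on every QNAP firmware:

smartctl -i /dev/sda   # the "Rotation Rate" line shows 5400/7200 rpm etc.
hdparm -B /dev/sda     # APM level: 1-127 permits spin-down, 254 (or "off") does not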

I checked the Seagate specs but it’s not obvious whether the IronWolf does this or not; you’d need to search a bit harder than I did. I generally use WD disks: the RED ones are NAS aware and variable speed, the GOLD both NAS aware and constant speed. Same for desktop and laptop disks: BLUE are variable and BLACK constant. The noise difference with constant speed is significant, but for business use they have better response times.

Sorry I can’t help more. I suggest some research into how the IronWolf manages its spin speed.

Phil

A bit more info: page 16 of the Seagate IronWolf manual says the disks should support four power modes: Active, Idle, Standby and Sleep. In the latter two the spindles do not rotate. This would indicate the QNAP is not entering low disk power mode: either it is not set / not working, or the QNAP is running some background task that keeps the disks on.
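
A quick way to see which of those states a drive is actually in (again assuming SSH access and that hdparm is available):

hdparm -C /dev/sda   # reports "active/idle", "standby" or "sleeping"
hdparm -y /dev/sda   # ask the drive to drop to standby now, then watch
                     # whether something immediately wakes it again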

Another thought is whether something on your LAN or the WAN might be regularly accessing content on the NAS.

I don’t have a QNAP, but if there are any WAN-facing services (remote access, for example), hopefully you’ll have changed or been asked to create non-default passwords at some point.

Just a thought, but if you’re a Roon user, which I think you are, and it’s using the NAS as its music store, then Roon could be keeping the NAS awake if it’s doing any background analysis, scanning of the library, etc.

I don’t remember my QNAP ever going to sleep in the years I have had it, though mine is running many applications and is not my music store but a backup. It might also be making snapshots of your data, as this can take a while if you have it turned on.

Thanks – my Roon store is now on my Nucleus, and while this NAS was set as the Roon backup, it still churns when unplugged from the network.

It runs this way when disconnected from the network :frowning:

It’s definitely set on the NAS - I suspect ‘background task’, but the little log I ran is somewhat uninformative to me! I’ll re-run a log and post shortly, thanks.

QNAP support can log in to check it out. They did that for me once to fix something. I take it you turned off all apps to see if one of those is the culprit?

I’ve had no luck getting a response from QNAP support – any tips? :slight_smile: Tried their web form, and a post on their FB account.

I just used their web form. I usually got a response in a day or so, but that was pre-Covid.

This is the current log - it all looks like this:

Start…
============= 0/100 test, Sat Jun 27 15:00:03 EDT 2020 ===============
<7>[22600.482431] smbstatus(24482): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 1/100 test, Sat Jun 27 15:01:57 EDT 2020 ===============
<7>[22630.726257] smbstatus(4311): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0
<7>[22630.727280] smbstatus(4311): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 2/100 test, Sat Jun 27 15:02:27 EDT 2020 ===============
<7>[22721.525385] jbd2/md9-8(2243): WRITE block 638480 on unknown-block(9,9) (8 sectors)

============= 3/100 test, Sat Jun 27 15:04:00 EDT 2020 ===============
<7>[22782.608609] smbstatus(17664): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 4/100 test, Sat Jun 27 15:04:59 EDT 2020 ===============
<7>[23058.603765] jbd2/md9-8(2243): WRITE block 639168 on unknown-block(9,9) (8 sectors)
<7>[23058.603842] jbd2/md9-8(2243): WRITE block 36040 on unknown-block(9,9) (8 sectors)
<7>[23058.603849] jbd2/md9-8(2243): WRITE block 36048 on unknown-block(9,9) (8 sectors)
<7>[23058.603854] jbd2/md9-8(2243): WRITE block 36056 on unknown-block(9,9) (8 sectors)
<7>[23058.603860] jbd2/md9-8(2243): WRITE block 36064 on unknown-block(9,9) (8 sectors)
<7>[23058.603866] jbd2/md9-8(2243): WRITE block 36072 on unknown-block(9,9) (8 sectors)

============= 5/100 test, Sat Jun 27 15:09:35 EDT 2020 ===============
<7>[23150.406821] smbstatus(27284): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 6/100 test, Sat Jun 27 15:11:07 EDT 2020 ===============
<7>[23425.509984] smbstatus(14715): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 7/100 test, Sat Jun 27 15:15:43 EDT 2020 ===============
<7>[23489.726223] jbd2/md9-8(2243): WRITE block 889528 on unknown-block(9,9) (8 sectors)

============= 8/100 test, Sat Jun 27 15:16:46 EDT 2020 ===============
<7>[23624.679720] jbd2/md9-8(2243): WRITE block 10808 on unknown-block(9,9) (8 sectors)

============= 9/100 test, Sat Jun 27 15:19:02 EDT 2020 ===============
<7>[23699.855277] smbstatus(8039): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 10/100 test, Sat Jun 27 15:20:16 EDT 2020 ===============
<7>[23761.405543] smbstatus(25953): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 11/100 test, Sat Jun 27 15:21:18 EDT 2020 ===============
<7>[23787.584026] jbd2/md9-8(2243): WRITE block 795760 on unknown-block(9,9) (8 sectors)

============= 12/100 test, Sat Jun 27 15:21:44 EDT 2020 ===============
<7>[23789.715303] disk_manage.cgi(2754): dirtied inode 7146 (qpkgStatus.con~) on md9

============= 13/100 test, Sat Jun 27 15:21:46 EDT 2020 ===============
<7>[24158.947050] jbd2/md9-8(2243): WRITE block 19280 on unknown-block(9,9) (8 sectors)

============= 14/100 test, Sat Jun 27 15:27:55 EDT 2020 ===============
<7>[24236.233074] jbd2/md9-8(2243): WRITE block 20032 on unknown-block(9,9) (8 sectors)
<7>[24236.233081] jbd2/md9-8(2243): WRITE block 20040 on unknown-block(9,9) (8 sectors)
<7>[24236.233087] jbd2/md9-8(2243): WRITE block 20048 on unknown-block(9,9) (8 sectors)
<7>[24236.233093] jbd2/md9-8(2243): WRITE block 20056 on unknown-block(9,9) (8 sectors)
<7>[24236.233099] jbd2/md9-8(2243): WRITE block 20064 on unknown-block(9,9) (8 sectors)

============= 15/100 test, Sat Jun 27 15:29:12 EDT 2020 ===============
<7>[24427.219058] jbd2/dm-0-8(3126): WRITE block 755259168 on unknown-block(253,0) (8 sectors)

============= 16/100 test, Sat Jun 27 15:32:24 EDT 2020 ===============
<7>[24585.484299] smbstatus(21624): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 17/100 test, Sat Jun 27 15:35:02 EDT 2020 ===============
<7>[24601.682043] md9_raid1(2230): WRITE block 1060216 on unknown-block(8,16) (1 sectors)

============= 18/100 test, Sat Jun 27 15:35:18 EDT 2020 ===============
<7>[24739.514353] smbstatus(1225): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 19/100 test, Sat Jun 27 15:37:36 EDT 2020 ===============
<7>[25226.372325] rsyslogd(30782): dirtied inode 13286 (kmsg) on md9
<7>[25226.372368] jbd2/md9-8(2243): WRITE block 995760 on unknown-block(9,9) (8 sectors)

============= 20/100 test, Sat Jun 27 15:45:43 EDT 2020 ===============
<7>[25439.509647] jbd2/md9-8(2243): WRITE block 31408 on unknown-block(9,9) (8 sectors)

============= 21/100 test, Sat Jun 27 15:49:16 EDT 2020 ===============
<7>[25447.122964] jbd2/dm-0-8(3126): WRITE block 755259768 on unknown-block(253,0) (8 sectors)

============= 22/100 test, Sat Jun 27 15:49:24 EDT 2020 ===============
<7>[25562.539198] smbstatus(31972): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 23/100 test, Sat Jun 27 15:51:19 EDT 2020 ===============
<7>[25632.978950] jbd2/dm-0-8(3126): WRITE block 755259

============= 24/100 test, Sat Jun 27 15:52:29 EDT 2020 ===============
<7>[25868.425210] smbstatus(13046): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 25/100 test, Sat Jun 27 15:56:25 EDT 2020 ===============
<7>[26223.824996] jbd2/md9-8(2243): WRITE block 791592 on unknown-block(9,9) (8 sectors)

============= 26/100 test, Sat Jun 27 16:02:20 EDT 2020 ===============
<7>[26447.589575] smbstatus(1457): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 27/100 test, Sat Jun 27 16:06:04 EDT 2020 ===============
<7>[26631.853853] smbstatus(1833): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 28/100 test, Sat Jun 27 16:09:08 EDT 2020 ===============
<7>[26662.588712] smbstatus(12291): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 29/100 test, Sat Jun 27 16:09:39 EDT 2020 ===============
<7>[26805.468084] md9_raid1(2230): WRITE block 1060232 on unknown-block(8,0) (1 sectors)
<7>[26805.468104] md9_raid1(2230): WRITE block 1060232 on unknown-block(8,16) (1 sectors)

============= 30/100 test, Sat Jun 27 16:12:03 EDT 2020 ===============
<7>[27336.400269] smbstatus(3092): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 31/100 test, Sat Jun 27 16:20:53 EDT 2020 ===============
<7>[27457.759781] md1_raid1(2581): WRITE block 7794127504 on unknown-block(8,0) (1 sectors)
<7>[27457.759812] md1_raid1(2581): WRITE block 7794127504 on unknown-block(8,16) (1 sectors)
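
For reference, this trace looks like the output of a block-dump style monitor. A minimal sketch of the same idea, assuming SSH access and an older kernel (the block_dump sysctl it relies on was removed in Linux 5.13):

#!/bin/sh
# Turn on the kernel's block-I/O dump, then sample the kernel log
# repeatedly to see which processes are touching the disks.
echo 1 > /proc/sys/vm/block_dump
i=0
while [ $i -lt 100 ]; do
    echo "============= $i/100 test, $(date) ==============="
    dmesg -c | grep -E 'dirtied inode|WRITE block'
    sleep 30
    i=$((i+1))
done
echo 0 > /proc/sys/vm/block_dump

Note that rsyslogd shows up in the trace above dirtying kmsg, so a few of the entries are the monitoring itself causing writes.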

Re-did the ticket, got an automated response, so maybe I’ll actually connect with them.

This will be of no use to you at all, I’m afraid, but my first Unitiserve was exactly the same. The disc would never stop whirring away, producing a noise that was just loud enough to be slightly annoying. When I asked Naim about it, they said it was normal, but frankly I should have pushed them for a better answer than that. Fortunately I didn’t need to, as the damned thing was destroyed by a lightning strike through the phone line. The replacement was, and still is, completely silent.

Another purpose is “availability” - i.e. with RAID 1, if one drive physically fails, the server/service stays available without downtime (including during replacement of the broken drive with a new one and the rebuild back to a redundant state).

This is not a “backup” (a single software fault or delete command can still erase everything in a second), but a convenient way to increase “uptime” - essential in a business environment, a convenience at home.
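
If you ever want to check the state of such an array from the command line, the Linux md tools show it directly (device name illustrative; on a QNAP the data array is often /dev/md1, as in the log above):

cat /proc/mdstat          # [UU] means both halves of the mirror are healthy
mdadm --detail /dev/md1   # per-member state and any rebuild progress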


Exactly. RAID 0 is for performance but carries a higher risk of data loss. I use RAID 0 for image processing etc., but the data is backed up onto two separate NAS systems.

RAID 1 offers a degree of protection against disk failure but, as you say, shouldn’t be used as a backup… it means your data will continue to be available while you wait for a new drive. Although it is recommended to have a spare drive handy so it can rebuild ASAP.

Deletion is easily protected against with snapshots, which are part of QNAP’s RAID system - a very handy feature I have used a few times. If the NAS is your primary storage for music, video etc. then RAID is no backup at all, but as you say it allows the device to stay operational in case of disk failure. In my opinion a NAS should be used as a backup to other sources, not the main source. That said, I have all my video on mine, but I don’t really care if I lose it, as it’s all easily replaceable from the other backup I make of the NAS.

I have a QNAP NAS in RAID 1, and this is one-way synced with a remote HDD. I also take a manual backup on a portable SSD, which I keep secure away from the house. Whenever I have had an issue with my NAS, I have logged into it, gone to the help centre and raised a ticket. I may just be lucky, but I have always found their support to be very helpful.