I definitely know Roon, but don’t want another piece of dedicated hardware (the Roon core) in my set-up. Just yet, anyway. Maybe some day.
Intermediate analysis even uncovers some invalid FLACs (that’s bad IMHO). My wild guess is that these encoding problems are caused by buffer overruns or resource leaks, or possibly priority interrupts causing the ripping pipeline to lose some data and encode zeroes in the output.
[In retrospect I can see why Naim claims “The Uniti Star is set to rip to uncompressed WAV by default. We believe this gives the best Sound Quality.” if they are aware of these encoding problems,
but that is contradicted slightly by “The advanced data handling algorithms ensure the ripped data is always a “bit perfect” copy”…]
Most of the ripping I do is while I’m in the room, listening to music (streamed or ripped) on the Star, and possibly doing some metadata editing in the Naim app (fixing cover art or titles etc.). It should be able to handle that, but any concurrency bugs in the firmware could lead to situations where things can (and, in my experience with the Star, will) go astray.
Once the data transfer has completed, it might be a good idea to run rsync a second time with the --dry-run --checksum options to make sure that no errors have been introduced when storing the transferred data on the target.
Data transfer errors are extremely unlikely, but storage errors can happen (and do happen) when backing up terabytes of data.
The checksum takes a lot of time, but it brings the probability of differences between the source and the backup copies down to almost zero.
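For example, the verification pass could look something like this (the paths are just placeholders):

rsync -av --checksum --dry-run /path/to/source/ /path/to/backup/   # placeholder paths

With --checksum, rsync re-reads and hashes every file on both sides instead of trusting size and modification time, and --dry-run makes sure nothing is actually rewritten; any file that shows up in the output differs between source and backup.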
Well, it wasn’t even a network rsync, as I’d just taken the SD card out of the Star and inserted it into my computer, so it’s a local copy from SD to SSD (but I like rsync for its incremental transfer capabilities). However, thanks for your concern!
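For completeness, the local copy is nothing more exotic than something like

rsync -av /media/sdcard/Music/ /mnt/ssd/music-backup/   # placeholder paths, not my actual ones

and on subsequent runs rsync only copies files that are new or have changed, which is the incremental behaviour I mean.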
I would expect the ripping software of the Uniti Star to checksum rips against AccurateRip or a similar database before storing them to files.
Therefore, I would think that, if your drive contains invalid FLAC files, perhaps there is something wrong with the drive itself, or with the Uniti Star’s drivers.
Anyway, if you conjecture a relationship between the reboot and/or freeze events and the file errors, you could perhaps try to find out whether the number of invalid FLAC files increases when such events occur.
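A rough way to get that number (assuming the rips sit under one directory and the flac command-line tool is installed) would be to test every file and count the failures:

find /path/to/rips -name '*.flac' -exec sh -c 'flac -st "$1" 2>/dev/null || echo "$1"' _ {} \; | wc -l   # /path/to/rips is a placeholder

flac -t decodes each file without writing any output and should exit non-zero on decoder errors, so this prints (and counts) the files that fail; repeating it after a reboot or freeze would show whether the count grows.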
You could also try ripping your collection on another computer, to check whether the Uniti Star operates normally when it is not ripping.
I very much doubt that Naim claims that “The Uniti Star is set to rip to uncompressed WAV by default. We believe this gives the best Sound Quality.” because they are aware of encoding problems.
I guess the default is to rip to WAV for historical reasons, which is a very bad idea anyway: if you rip to WAV, the metadata are encoded in a proprietary, non-exportable format. This basically means that the rips are not portable and that the metadata can only be read by Naim systems.
Expecting but not verifying such things is never a good idea
(by the way, the ripper built into Roon doesn’t: it uses CD Paranoia at high settings but does no AccurateRip checks).
The Uniti Star is set to rip to uncompressed WAV by default. We believe this gives the best Sound Quality. However, FLAC can be chosen which results in a smaller file size and a larger amount of files can be stored on an equivalent sized drive.
Yes, I use rsync for all my backup duties, no matter whether via USB or Ethernet.
It is anyway a good idea to checksum backup copies from time to time, even if the data transfer is via USB: rsync itself ensures that the data transfer is correct (no matter how this is done), but it then delegates the writing of the transferred files to the target OS.
On very large data sets, write-to-disk errors can occur. This is why it is important to run rsync a second time with the checksum and dry-run options: it spots write-to-disk errors, which are not so unlikely on very large data sets.
This second “verification” run should be an integral part of any backup workflow. It takes a lot of time, but it makes sure that the quality of the backups is preserved over time.
The alternative is to end up with a (very small, but increasing) number of differences between the original source and the backups.
Possibly. Audio-quality-wise there is zero difference between WAV and FLAC anyway (they end up being the same bitstream going into the DAC), and as you say, FLAC metadata is portable and WAV metadata is not. And one can also fit way more FLACs on a storage medium than WAVs.
That was indeed the source of my quote. (Doesn’t everyone read all these FAQs when owning a complex machine that comes without any manual to speak of?)
True, but … I am not going to buy a Uniti Star just to check how it verifies rips :-)! I have, by the way, ripped all my CDs with cdparanoia. If I remember correctly, it was set to match every chunk 3 times, but it was many years ago!
Of course, but as long as it is not known with certainty that it does, and as long as other evidence does not rule it out, the possibility remains that the errors are introduced at rip time.
Sure, that’s perfectly possible. I would try to rip a CD while the device has no internet connection. If the ripping software does an AccurateRip check and is reasonably well written, it should at least issue a warning. But who knows …
The decompression of FLAC needs computer resources, and this can theoretically create additional electrical noise. However, the general consensus seems to be that it does not matter on the new streamer platform, but that there was at least a small audible effect from this on the old platform.
I’m pretty sure the errors were introduced at rip time, since after re-ripping the CDs that had FLAC errors, they no longer had them (I made sure the Star wasn’t doing anything else when I ripped them a second time).
And again, these were not CDs that the Star itself flagged as having ripping errors. (For those without a Star: it has a separate section in the music library for CDs with ripping errors, so they’re easy to find.)
So … the results are in. Of some 7000 tracks, 51 had real FLAC errors (according to flac -a):
36 had FLAC__STREAM_DECODER_ERROR_STATUS_LOST_SYNC
9 had FLAC__STREAM_DECODER_ERROR_STATUS_UNPARSEABLE_STREAM
6 had FLAC__STREAM_DECODER_ERROR_STATUS_FRAME_CRC_MISMATCH
and 55 had areas of zeroes in them (audible as ‘hiccups’).
In the interest of science, and for those wishing to replicate my work (or criticise my methodology), I ran the analysis with:
flac -sfa --skip=00:10 --until=-00:20
i.e. skip the first 10 and the last 20 seconds, as some tracks have leading and/or trailing silence.
[Those numbers are a bit of trial & error; I went through a 1-2-5 cycle to minimise the number of false positives; it appears I have some (classical) tracks with just south of 20 seconds of silence at the end.]
[Also interesting to see how many tracks I have that last less than 30 seconds (yes, flac will complain; I ignored these files).]
That gave 51 errors.
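For anyone wanting to reproduce this over a whole library, a loop along these lines should do it (the directory is a placeholder; flac should drop a .ana analysis file next to each track):

find /path/to/rips -name '*.flac' -exec flac -sfa --skip=00:10 --until=-00:20 {} \;   # /path/to/rips is a placeholder

Decoder errors are reported per file as the analysis runs.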
And when flac was done, I looked for analysis files that had type=CONSTANT value=0 in them. That gave 55 matches.
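For the record, that search can be done with a plain grep over the generated analysis files, something like (path again a placeholder, and the exact field layout of the .ana output may differ between flac versions):

grep -rlF 'type=CONSTANT value=0' --include='*.ana' /path/to/rips | wc -l   # /path/to/rips is a placeholder

which lists (and counts) the analysis files containing at least one all-zero constant subframe.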
Anyway, I’ve got my work cut out for me: re-rip the ones with errors, and check again when done. Will take a while, so don’t hold your breath.
FLAC decompression takes some resources, but not a whole lot (and more or less independent of the compression level). Remember: decompression is a lot simpler than compression. If anyone has hard scientific evidence that this extra ‘electrical noise’ can be measured at the audio output stage, I’d be very surprised.