FAT32 vs NTFS

#1

Why do laptops with XP installed run on FAT32 and not NTFS? Is there some kind of advantage?
OK, here’s the situation: my neighbor asked me to restore her laptop. It’s brand new, but somehow she really messed it up. When I put the set of restore disks in, the first one started it up in Windows 98 and formatted the drive to FAT32. The next three added files and programs. Then I was told to put the first disk back in, and it installed XP, but it runs on FAT32. Why?
It formatted the whole hard drive to FAT32 and split the 160 GB into two equal partitions. When the first disk is put in, as it fires up you can see the same screen as when you used to put the little floppy boot disk in for Win 98. It says something about Win 98, and very quickly it starts formatting with no options.


#2

I can only imagine they’re using an XP Upgrade and not a Full version. Very strange. You could still convert it to NTFS after the installation, though.


#3

For starters, they’re using Win98 boot discs, which are DOS and cannot see an NTFS partition. If the restore programs are DOS programs, the same applies. Pretty lazy programming on the part of the laptop maker, IMHO. But unless you want to go searching for drivers, you’re stuck using the maker’s driver restore discs. If these discs are accessible in WinXP, there’s the possibility you can just install XP then get the drivers off the discs. But it sounds like they’re using a proprietary restore disc with an ISO of XP on it, so you have no other way of getting the OS installed.


#4

Win98 can only use FAT or FAT32; that is why the hard drive was formatted as FAT32 to begin with. Only NT-based operating systems can use NTFS. If you convert a FAT partition to NTFS, you will get tiny 512-byte clusters, resulting in a loss of performance. If you have Partition Magic, you can convert it and then change the cluster size afterwards. Only certain versions of PM can do this.


#5

I’ve changed my FAT32 hard disk to NTFS with Partition Magic.
Now the cluster size is 512 bytes.

Should I leave it this way or change the size?


#6

That’s the default cluster size for conversions. 4KB is the default size for fresh formats. Changing it will gain you very little. Smaller clusters mean less wasted space, but more overhead in the file system. Larger clusters mean more wasted space and less overhead. It’s a trade-off, but you’ll not notice much, if any, difference in the “feel” of the system. The difference will be most apparent on large file transfers.


#7

So this means that with a cluster size of 512 bytes I just have to defragment the disk more often to keep it optimal?


#8

Cluster size has no impact on fragmentation, but you should defrag regularly in any case. Clusters are just the allocation units of the file system. If you have 512-byte clusters and store a 1 KB file, it will use 2 clusters (1 KB) exactly. If you have 4 KB clusters and store the same 1 KB file, it will use 4 KB. In the file properties, you will see “size of file = 1 KB”, “size on disk = 4 KB”. If you store a bigger file, say 32 KB, it will use 64 clusters in the former case and 8 in the latter, which is where the overhead comes in: to read or write that file, the drive has to index 8x as many clusters in the former case. That takes an extra smidgen of time, but in day-to-day operation you won’t notice it. The 8 MB cache on the drive will absorb much of the write load, and the files will get written just as fast. YMMV.
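A few lines of Python make the allocation arithmetic above concrete (the 1 KB and 32 KB file sizes are just the examples from this post):

```python
def size_on_disk(file_size: int, cluster_size: int) -> int:
    """Round a file's size up to whole allocation units (clusters)."""
    clusters = -(-file_size // cluster_size)  # ceiling division
    return clusters * cluster_size

for cluster in (512, 4096):
    used = size_on_disk(1024, cluster)                    # the 1 KB file
    count = size_on_disk(32 * 1024, cluster) // cluster   # clusters for the 32 KB file
    print(f"{cluster}-byte clusters: 1 KB file occupies {used} B on disk; "
          f"32 KB file needs {count} clusters")
```

So the 4 KB layout wastes 3 KB on the small file but has to track only one-eighth as many clusters for the larger one.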


#9

Thanks for explaining it.
I’ll leave it as it is.


#10

I use NTFS for several reasons:

  1. I think it’s more reliable, but I could be wrong. I’ve never had any corrupted files, except when the drive got sick.

  2. I don’t have to sit there while the OS scans my disk when booting after an improper shutdown. This alone is all the reason I need to go to NTFS.

  3. I don’t get those FOUND.000 files cluttering up my partitions.

XP has the convert.exe command to convert FAT to NTFS. I’ve used it a few times and it never failed. I would back up important data first, just in case.


#11

CDan, you’re dead wrong about that. Snowie, you should convert those clusters to the default 4 KB clusters with PM if you have it. The smaller the cluster size, the more your files will become fragmented. With clusters that small, files will spread across the disk even more, causing the head to move back and forth more than necessary. I’m surprised your PC isn’t sluggish as it is with those tiny clusters.


#12

You’re correct that files may be more fragmented, but only if the disc is already more fragmented than it should be. Files will not get “spread across” the disc unless you have badly fragmented free space. Fragmentation is a completely separate issue, not related to cluster size. If your disc is so fragmented that it makes a difference whether 4 KB or 512-byte clusters are used, then you’ve got bigger problems. A file doesn’t get split unless the free space is fragmented, and needless to say, that’s what defragmenter tools are for.

I’ll repeat, smaller clusters do not in and of themselves cause fragmentation.


#13

When you format a volume as NTFS, you have a choice of cluster size. If you know how the volume is going to be used, you can choose a cluster size more optimal than the default.

Depending on your priorities, you might want to choose a different cluster size than the default, but be careful. Choosing a smaller cluster size will waste less space but is more likely to cause fragmentation. Larger cluster sizes are less likely to cause fragmentation but will waste more space.

512-byte clusters in particular are problematic, because the MFT consists of records that are always 1024 bytes. On a volume with 512-byte clusters, individual MFT records can themselves become fragmented. MFT-record fragmentation of this type is not possible with larger cluster sizes, each of which can hold one or more complete MFT records.
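The record arithmetic is easy to check (the 1024-byte record size is fixed by the NTFS on-disk format):

```python
MFT_RECORD = 1024  # bytes; fixed size of an NTFS MFT file record

for cluster in (512, 1024, 4096):
    spans = -(-MFT_RECORD // cluster)  # ceiling division
    note = "can fragment across non-adjacent clusters" if spans > 1 else "always contiguous"
    print(f"{cluster:>4}-byte clusters: one MFT record spans {spans} cluster(s) ({note})")
```

Only the 512-byte case needs more than one cluster per record, which is what opens the door to record-level fragmentation.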

If a file or directory is contiguous, the cluster size doesn’t matter, except to the extent that it wastes a small amount of space. It is therefore wise to choose a cluster size large enough to discourage more fragmentation than you are likely to encounter.

But if you know that you have a very large number of small files, or very few small files, you have information you can use to make a better cluster-size decision. Also, a very large absolute number of files (on the order of 100,000) makes fragmentation of the MFT more likely; in that case, a larger cluster size will limit fragmentation of the MFT as it grows to accommodate them.
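If you do know your file-size distribution, a rough slack-space estimate is easy to sketch. This ignores NTFS’s ability to store very small files resident in the MFT, and the workload below is entirely made up for illustration:

```python
import random

def total_slack(file_sizes, cluster_size):
    """Total space wasted to partially filled final clusters."""
    return sum(-(-s // cluster_size) * cluster_size - s for s in file_sizes)

random.seed(0)
# Hypothetical workload: 100,000 mostly-small files, 1 B to 16 KB each
sizes = [random.randint(1, 16 * 1024) for _ in range(100_000)]

for cluster in (512, 4096, 64 * 1024):
    mib = total_slack(sizes, cluster) / 2**20
    print(f"{cluster:>6}-byte clusters: ~{mib:,.0f} MiB of slack on this workload")
```

The trade-off shows up immediately: slack grows with cluster size, and the smaller the files, the steeper the growth.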

Note that it is possible to create an NTFS volume with a cluster size greater than 4 KB, but if you do, you cannot use NTFS compression, nor can you defragment it through the built-in Microsoft defragmentation interface.
