Exciting technology

What up-and-coming tech has your interest?

  • Claw Hammer

    Votes: 0 0.0%
  • P4

    Votes: 0 0.0%
  • Duallies (Either AMD or P4)

    Votes: 0 0.0%
  • Big 3 Graphics (NV30, R300, Matrox G1000)

    Votes: 0 0.0%
  • Other Graphics (Creative 3D labs, Kyro, Trident, SiS)

    Votes: 0 0.0%
  • DDR-2 (Next Jan or so they say)

    Votes: 0 0.0%
  • Storage (please elaborate)

    Votes: 0 0.0%
  • Faster current memory (DDR or RDRAM)

    Votes: 0 0.0%
  • No thanks, happy with what I've got

    Votes: 0 0.0%

  • Total voters
    0

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
Damn, that wasn't the quote I was thinking about.

Here is the exact quote:
I saw this in a movie about a bus that had to _speed_ around the city,
keeping its _speed_ over fifty. And if its _speed_ dropped, the bus
would explode! I think it was called... "The bus that couldn't slow
down."
-- So close, yet so far, Homer, "The Springfield Files"

Here is what I was thinking of:
Skinner: Well...maybe it was for the best. Now I...I finally have time
to do what I've always wanted: write the great American novel.
Mine is about a futuristic amusement park where dinosaurs are
brought to life through advanced cloning techniques. I call
it "Billy and the Cloneasaurus."
 

Platform

Learning Storage Performance
Joined
May 10, 2002
Messages
234
Location
Rack 294, Pos. 10
. . · .
Cliptin said:
I grouped my choices the way I did, SATA and solid-state storage together, for two reasons.

Solid State: Don't expect a lot anytime soon in this department (i.e. -- something affordable for the masses). There simply isn't anything on the horizon to push these technologies beyond just periodic refinement of small low-power storage devices for portable use. Exotic solid-state devices, such as a sugar-cube-sized device based on optical technology, remain laboratory experiments. One of the strangest high-capacity storage technologies ever to come along was protein-based (a.k.a. "organic storage"). That also is still pie-in-the-sky technology.


One storage technology that is finally beginning to creep out the doors of a few companies, very painfully slowly, is multi-layer fluorescent optical disc and tape technology -- also called by various other names such as holographic storage technology or 3-D optical storage technology. Both of these WORM media will eventually offer extreme data capacity and high data read rates with excellent archival qualities. The disc product will start off with well over 100 GB capacity, maybe as much as 200 GB initially, and have a read speed of around 40 MB/s. The tape product will have the highest volumetric data capacity of any medium available, starting off with 1 TB or a bit more storage capacity per 8mm/AIT-sized optical tape cartridge with a read speed of 40 MB/s.


HD-ROM is another WORM (disc) technology that could be released now if the company (Norsam) had the cash to perfect a commercial digital product, though IBM has been an investor for a while now. The data capacity of HD-ROM will be pretty high -- equal to or slightly greater than first-generation holographic disc storage technology -- and the medium's archival quality will be second to none, as in having a potential million or more years of stable shelf life and/or fireproof storage of data. They currently offer an analogue product called HD-Rosetta using the same ion-beam writer technology that HD-ROM uses, where incredible numbers of monochrome images can be stored on a CD-ROM-sized disc -- essentially a replacement for microfilm. HD-Rosetta data is read with an electron microscope. HD-ROM will use a conventional-looking CD-like reader.


Blue-laser DVD will emerge next year with 20 ~ 25 GB capacity.

These two technologies are interesting enough by themselves. When combined, I see a progression toward smaller fast computers.

Back to SATA: one other thing is that SATA will introduce lower operating voltages. Computer system makers want to reduce power and voltage requirements so as to reduce power supply sizes. SATA will help them do this.


PS Platform, are the two white dots at the end of your message supposed to mean something?

I'm signaling certain people across the world.

. . · .
 

Platform

Learning Storage Performance
Joined
May 10, 2002
Messages
234
Location
Rack 294, Pos. 10
. . · .
Bozo said:
...We also have lickity-split memory, and plenty of it.

I'll agree with "plenty of it," but I disagree with the preceding "lickity-split" part. Primary memory is now lagging WAY behind processors in the speed department, and it continues to trail ever further as new, faster processors come out monthly. Processor speed has kept marching away from primary memory speed for over two decades now.


Back in the dark ages of computing (where I come from) we had "fast" static electronic memory in mainframes and mini-computers. In fact, the whole damn system ran at the same clock rate as the processor -- primary memory and data channels included. Secondary and tertiary (external) storage was, of course, designed to operate independently of the processor clock. By the way, I'm well aware WHY we are where we are today with primary memory speeds; it's just too bad we ended up running out of economical solutions. Nobody but supercomputer users is going to be able to afford wide and deeply-interleaved primary memory subsystems and/or high-capacity memory systems built to use static RAM.


Even with the widening disparity between RAM and processor speeds, PC makers are doing an excellent job of value engineering. We are now seeing the front-side bus speed finally break the half-gigahertz mark (533 MHz). Memory devices will be available in less than a year that operate at 75% of the newest front-side bus speed (roughly 400 MHz).


Anything on the horizon to alleviate our current problems? Yes: Magnetic RAM. M-RAM is a bit of a throwback to core memory, where a tiny magnetic donut stored a bit. The upcoming (2005/6?) M-RAM will allegedly be pretty fast and dense.


...Maybe not having to release a new BIOS or chipset drivers every week or two, to fix another, and another, and another problem...

I don't experience that, since I'm running systems based on Supermicro mobos. :^)




. . · .
 

Platform

Learning Storage Performance
Joined
May 10, 2002
Messages
234
Location
Rack 294, Pos. 10
. . · .
Barry K. Nathan said:
Of course, nobody is going to have ATA drives with STRs anywhere near that fast anytime soon, and PCI cards are going to be of little help on this front because most people still have 32-bit 33MHz, 133MB/s (if you're lucky -- some chipsets have bugs that cripple the speed to various limits like 75MB/s or 90MB/s) buses.
Channel bandwidth is not just about data; there is also command overhead, not to mention the small slices of time where nothing happens during command and data transmissions -- all of this figures into the total bandwidth of the channel. On the outer tracks, platter media rates can now saturate an ATA-66 interface, so we are getting there. If IBM's "Pixie Dust" platters ever become a reality, we would be able to saturate an ATA-133 channel.
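As a rough illustration -- the numbers here are invented placeholders, not measured ATA figures -- this is the arithmetic of how per-command overhead drags the effective rate below the rated channel speed:

#include <stdio.h>

int main(void)
{
    double rated_mb_s  = 66.0;   /* ATA-66 rated channel speed */
    double payload_kb  = 64.0;   /* data moved per command */
    double overhead_us = 100.0;  /* assumed command + turnaround dead time */

    /* time the payload itself occupies the channel, in microseconds */
    double data_us = payload_kb / 1024.0 / rated_mb_s * 1e6;

    /* only the fraction of time spent moving data counts toward throughput */
    double effective = rated_mb_s * data_us / (data_us + overhead_us);

    printf("effective: %.1f MB/s of %.1f MB/s rated\n", effective, rated_mb_s);
    return 0;
}

With those made-up numbers the 66 MB/s channel delivers about 60 MB/s of real data; shrink the transfers and the overhead eats proportionally more.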


48-bit LBA addressing is at least a proposed standard, if not an approved one (I don't feel like checking www.t13.org to see which it is). It's not proprietary. I have no idea if ATA133 is proprietary or not, however.

ATA133 is Maxtor's proprietary standard. The Maxtor interface's 133 MB/s channel bandwidth is a bit of snake oil with respect to current ATA hard drives; it's mostly a marketing attempt to woo people with high capacity needs -- as in up to 160 GB worth of capacity. At least cache and buffer performance will be enhanced with ATA133.


48-bit LBA addressing has long since been approved by ANSI as the "next" standard for addressing sectors; it's just that the overall ATA/ATAPI-6 standard has not been approved quite yet.


===============================================
The SATA command set is enhanced to provide SCSI-like capabilities. So, even at an equivalent channel speed SATA will still provide more real-world throughput because of its efficiency.
===============================================
What do you mean by this? If you're talking about Tagged Command Queueing, then IBM (75GXP and later) and Western Digital (some WD1200BB's, and possibly other models) PATA drives already support this, and it's just driver support in Windows, Linux, etc. that's lacking. (BTW, there's an experimental patch that adds support to Linux for TCQ on these drives.)

Yes, I'm talking about the same thing you are (tagged command queuing, et al.), except that SATA has official, documented support for these advanced capabilities. I believe the IBM 34GXP was the first ATA hard drive that supported tagged command queuing.
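For the curious, the core idea behind command queuing looks roughly like this toy sketch -- real TCQ lives in drive firmware and is far more involved, but the win comes from servicing queued requests in positional order rather than arrival order:

#include <stdio.h>
#include <stdlib.h>

static int by_sector(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    long queued[] = { 7200, 150, 9900, 300, 5100 };   /* arrival order */
    size_t n = sizeof queued / sizeof queued[0];

    /* a queuing drive is free to reorder by head position,
       cutting total seek distance across the batch */
    qsort(queued, n, sizeof queued[0], by_sector);

    for (size_t i = 0; i < n; i++)
        printf("service sector %ld\n", queued[i]);
    return 0;
}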


...I think you're going to need new drivers for the "SCSI-like capabilities" you mention above, though...
True, if you want to take full advantage of the advanced capabilities that SATA will offer, but you will not need to change operating system drivers to use a SATA device. There will be a new ATA BIOS with SATA controllers that will take care of interfacing SATA storage devices to the system. Your existing ATA device driver will work with the new ATA BIOS.


...Are these SATA RAID host adaptors going to be the RAID equivalent of WinModems, like most previous ATA RAID host adaptors, or are they going to mostly be real RAID cards?

All I can tell you is that the data channel will change. The quality and functionality of a RAID controller -- SCSI or ATA -- depends on the manufacturer's design capability. I'll venture to guess that we'll see an explosion of RAID devices that use SATA hard drives. These will be host adaptors and external devices attached via Fibre Channel, SCSI, and iSCSI. SATA won't be able to grab much of the higher end of the RAID market, because it may take a while before 10kRPM and 15kRPM SATA hard drive mechanisms show up (don't discount the possibility of 12kRPM drives). But high-capacity RAID boxes using SATA hard drive mechanisms WILL BE A GIVEN.


===============================================
Basically speaking, SATA will both evolutionise and revolutionise common storage as we know it, and do so rather swiftly. Once people experience how good SATA technology will be, I'm sure nobody in their right mind will want to go back to crappy stuck-in-first-gear Parallel ATA technology and its cursed broad grey airflow-killing cabling.
===============================================
No argument there; the SATA cables would be revolutionary [perhaps that's the wrong word, but hopefully the meaning gets across] enough even if it was otherwise unmodified PATA running on them. In fact, that's what I see as the biggest advantage of SATA. (Unless there's other stuff that I'm unaware of, especially regarding SCSI-like capabilities, the other aspects of SATA don't seem particularly important to me based on my current knowledge of them.)

SATA cabling will be evolutionary, since you will have a SINGLE thin cable supplying both the data connection and the power to the SATA device.

The revolutionary aspect of SATA is that it will kill off PATA (or PiTA PATA) technology in any new system rolling off the assembly line several months from now.


. . · .
 

Barry K. Nathan

What is this storage?
Joined
Feb 9, 2002
Messages
42
Location
Irvine, CA
Platform said:
. . · .
Barry K. Nathan said:
Of course, nobody is going to have ATA drives with STRs anywhere near that fast anytime soon, and PCI cards are going to be of little help on this front because most people still have 32-bit 33MHz, 133MB/s (if you're lucky -- some chipsets have bugs that cripple the speed to various limits like 75MB/s or 90MB/s) buses.
Channel bandwidth is not just about data, there is also command overhead not to mention the small bits of time where nothing happens during command and data transmissions -- all of this figures into the total bandwidth of the channel.
I'm fully aware of this, and BTW it applies to the PCI bus as well as to the ATA channel; that's another thing I meant to allude to with the phrase "if you're lucky."

On the outer tracks, platter media rates can now saturate an ATA-66 interface, so we are getting there. If IBM's "Pixie Dust" platters ever become a reality, we would be able to saturate an ATA-133 channel.
The 120GXP uses those "Pixie Dust" platters already, according to IBM's web site, and the STR is nowhere near maxing out ATA-100 -- never mind ATA-133.

That is not to say that ATA-100 won't be maxed out in the (somewhat) near future, just that Pixie Dust alone is not sufficient.
 

James

Storage is cool
Joined
Jan 24, 2002
Messages
844
Location
Sydney, Australia
Bozo said:
How about software that's compatible with someone else's software? (Did you know that XP will not 'see' anything running Samba, by design?)
Well, I got it working at home. I have 2 XP desktops, an XP laptop, a Turtle Beach Audiotron, and a Win95 web tablet that all talk fine to a Samba server I have running on a P166 FreeBSD box (Samba 2.2.3a). It was working fine under Solaris on an Ultra 5 (Samba 2.2.2) too.

The Samba box even shares different partitions depending on who it is that logs in.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,268
Location
I am omnipresent
James said:
Well, I got it working at home. I have 2 XP desktops, an XP laptop, a Turtle Beach Audiotron, and a Win95 web tablet that all talk fine to a Samba server I have running on a P166 FreeBSD box (Samba 2.2.3a). It was working fine under Solaris on an Ultra 5 (Samba 2.2.2) too.

The Samba box even shares different partitions depending on who it is that logs in.


James, I'd still like to know what you did that I didn't. I tried precompiled. I tried rolling my own. I used a default smb.conf and one I made by hand. I even tried three different versions of Samba. I never got so much as a connection.
 

Cliptin

Wannabe Storage Freak
Joined
Jan 22, 2002
Messages
1,206
Location
St. Elmo, TN
Website
www.whstrain.us
There is no technical reason not to increase an ATA drive's disk cache to 64 MB or 128 MB, really. If they can write firmware algorithms to take advantage of it, then a high-bandwidth interface becomes more important.
 

i

Wannabe Storage Freak
Joined
Feb 10, 2002
Messages
1,080
Cliptin said:
There is no technical reason not to increase an ATA drive's disk cache to 64 MB or 128 MB, really. If they can write firmware algorithms to take advantage of it, then a high-bandwidth interface becomes more important.

I don't understand much about the internal workings of hard disks, so here goes: wouldn't increasing the on-disk buffer to something that high increase the chances of serious data loss after a power failure?

You say, "not really ... the operating system could just as easily be caching 64 Mb worth of data." To which I reply, "but what about a journaling file system?" If it's the OS that's doing the caching, at least it has the chance to manage the journal information such that data loss will be minimized. But if you put that cache on the brainless disk, well ... you're screwed.

Am I close?
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
James, I would also like the details of how you got XP to talk to a Samba box.

Bozo :D
 

Cliptin

Wannabe Storage Freak
Joined
Jan 22, 2002
Messages
1,206
Location
St. Elmo, TN
Website
www.whstrain.us
i said:
wouldn't increasing the on-disk buffer to something that high increase the chances of serious data loss after a power failure?

On the purely hardware side, it depends on how much is dedicated to reads versus writes. If you only use as much for writes as is used currently, then the chance is no greater.
 

James

Storage is cool
Joined
Jan 24, 2002
Messages
844
Location
Sydney, Australia
I'm famous! :mrgrn:

It must have been something that I did to the XP side, since, as I said, I have just changed both Samba versions (and indeed server system architectures) and didn't have to change anything on the server end.

Once I get home and I'm in front of my notes (I keep a journal when I do computer work, so I can see what I have and haven't tried), I'll post a full explanation.
 

James

Storage is cool
Joined
Jan 24, 2002
Messages
844
Location
Sydney, Australia
Cliptin said:
i said:
wouldn't increasing the on-disk buffer to something that high increase the chances of serious data loss after a power failure?

On the purely hardware side, it depends on how much is dedicated to reads versus writes. If you only use as much for writes as is used currently, then the chance is no greater.
Hard drive buffers are always read, never write. Loss of power wouldn't affect anything; the OS caches are usually read as well, except for certain applications (like databases) which also have write caches. In that case the write is usually done through a three-phase commit, so there's very little chance of data loss there.

The only other bit of cache in the disk subsystem tends to be the write cache on the RAID controller. Because loss of power in that case does have a strong chance of buggering up your file system, the write cache is almost always backed up by a battery (flash RAM isn't used because it's more expensive and slower to write to). When power is restored, the cache writes all the pending data to the disks before any further operations are allowed on the array.
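A toy model of that replay-before-serve behaviour -- all names invented, purely illustrative of the ordering, not any real controller's firmware:

#include <stdbool.h>
#include <stddef.h>

struct pending_write { long lba; const void *buf; size_t len; };

static struct pending_write cache[128]; /* battery-backed: survives the outage */
static size_t cache_used;
static bool   array_ready;

static void disk_write(long lba, const void *buf, size_t len)
{
    (void)lba; (void)buf; (void)len;    /* stand-in for the real hardware write */
}

void on_power_restored(void)
{
    for (size_t i = 0; i < cache_used; i++)     /* replay pending data first */
        disk_write(cache[i].lba, cache[i].buf, cache[i].len);
    cache_used  = 0;
    array_ready = true;                         /* only then accept new I/O */
}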
 

Cliptin

Wannabe Storage Freak
Joined
Jan 22, 2002
Messages
1,206
Location
St. Elmo, TN
Website
www.whstrain.us
James said:
Hard drive buffers are always read, never write.

I don't think that is true. Remember the Win98 write cache/fast shutdown problems? Additionally, with journaling filesystems it would be stupid not to implement write caching in the OS. You get protection and some performance benefit.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
I agree, Cliptin: When setting up a RAID controller, I have seen an option to enable or disable the write cache on the hard drive. There is a warning that the data could be lost if there is no UPS attached to the computer.

Also, in Win2k, there is a check box to disable write caching for a hard drive in Device Manager.

Bozo :D
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Bozo said:
I agree, Cliptin: When setting up a RAID controller, I have seen an option to enable or disable the write cache on the hard drive. There is a warning that the data could be lost if there is no UPS attached to the computer.

Also, in Win2k, there is a check box to disable write caching for a hard drive in Device Manager.

Bozo :D

That write cache would be provided by Win2k, I imagine.
 

Cliptin

Wannabe Storage Freak
Joined
Jan 22, 2002
Messages
1,206
Location
St. Elmo, TN
Website
www.whstrain.us
Bozo said:
I agree, Cliptin: When setting up a RAID controller, I have seen an option to enable or disable the write cache on the hard drive. There is a warning that the data could be lost if there is no UPS attached to the computer.

Also, in Win2k, there is a check box to disable write caching for a hard drive in Device Manager.

Bozo :D

Then again, Bozo, do you think that setting was being applied to the controller or to the drive itself?

In looking back over what I wrote in my last post, everything I said applies to the OS even though in my head I was thinking firmware.
Thanks Pradeep.
 

Prof.Wizard

Wannabe Storage Freak
Joined
Jan 26, 2002
Messages
1,460
IMO there's no news release more exciting than the launch of a brand-new processor... That's why I voted for the ClawHammer.

Let's face it: computers were built to calculate vast amounts of data that humans were unable to process in a reasonable amount of time. CougTek is right on this one. I believe that all other advancements are mere subsystem upgrades: and this includes storage, memory, and video adapters.

ClawHammer will shake up the CPU waters quite a bit if it delivers what the AMD guys preach... 64-bit computing while retaining compatibility with 32-bit legacy applications is no small deal IMHO. This, combined with a bunch of other nice technologies from AMD, should boost the overall abilities of current PCs to much higher levels.

All other advances come next...
 

Tea

Storage? I am Storage!
Joined
Jan 15, 2002
Messages
3,749
Location
27a No Fixed Address, Oz.
Website
www.redhill.net.au
And what will its magnificent 64-bitness actually do? Oh sure, it will no doubt be faster for floating-point intensive stuff, about which I don't really care anyway, but overall, the major bottlenecks in computing are on the I/O side, not the processing side.

That's why I voted for 10,000 RPM IDE. More real-world significance to the average user.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,268
Location
I am omnipresent
64bit on the desktop is probably going to have to wait to find an application. I think I wrote something earlier in the thread about it. I'm sure Tannin remembers this clearly:
When OS/2, NT and Windows 95 were released, it was with great fanfare and a promise of "faster" computing; imagine actually using the 32-bit CPU you've had for years with fully (OK, mostly) 32-bit software.

Of course, the first thing just about every joe-average user said after installing OS/2 or Win95 was "Gee, my computer doesn't really seem faster." And if they installed NT, they said "Gee, how come my computer is so slow all of a sudden."

And so it will be again.

10k IDE has been examined to death and it DOES matter. Slow memory hurts, too. The lack of I/O subsystem improvements is starving our PCs of the data they need to actually USE the CPUs we have now. What's the P4 on now, a 15x multiplier? That's an awful lot of waiting around.

I think the technology that would excite me the most today (insert bad joke here) is probably widespread adoption of a reliable software environment. I won't say *nix on the desktop, but if I never have to reinstall Win98 or apply 16 IE security patches again I'll be much happier.
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
Platform said:
. . · .
48-bit LBA addressing is at least a proposed standard, if not an approved one (I don't feel like checking www.t13.org to see which it is). It's not proprietary. I have no idea if ATA133 is proprietary or not, however.

ATA133 is Maxtor's proprietary standard. The Maxtor interface's 133 MB/s channel bandwidth is a bit of snake oil with respect to current ATA hard drives; it's mostly a marketing attempt to woo people with high capacity needs -- as in up to 160 GB worth of capacity. At least cache and buffer performance will be enhanced with ATA133.


48-bit LBA addressing has long since been approved by ANSI as the "next" standard for addressing sectors; it's just that the overall ATA/ATAPI-6 standard has not been approved quite yet.
. . · .

Neither ATA/ATAPI-6 (T13 Project 1410D (UDMA100)) nor ATA/ATAPI-7 (T13 Project 1532D (UDMA133)) has been approved. However, both proposals address 48-bit LBA (E00101R6). Presently, ATA/ATAPI-6 is in draft 2b. The ATA/ATAPI-7 proposal may be delayed since the T13 Committee is in the process of taking over the Serial ATA 1.0 specification. It will be interesting to see how the industry transitions to these soon-to-be-established standards.
 

Prof.Wizard

Wannabe Storage Freak
Joined
Jan 26, 2002
Messages
1,460
Mercutio said:
64bit on the desktop is probably going to have to wait to find an application.
I doubt that. For marketing reasons, most software houses will start providing 64-bit versions of their programs even if there's no big performance gain.

IMO: 64bit apps will reach our PCs much sooner than you think...
 

cas

Learning Storage Performance
Joined
May 14, 2002
Messages
111
Location
Pittsburgh, PA
64bit computing is really only of interest to applications with data sets greater than ~30 bits. Recognize, however, that you certainly don't need 4GB of physical RAM to enjoy benefits from a 64bit CPU.

As I mentioned on SR, programming the x86 was only a hassle when the data or code couldn’t be referenced from a single gpr. Writing a 64k program with a 16bit cpu is very clean. So too, a 4G program with a 32bit cpu, and so on. Since modern operating systems and processors slice the available address space in a number of ways, only 30-31bits are really available in most 32bit machines, and roughly 40bits in many 64bit machines.

I agree wholly with Mercutio’s suggestion that the move to 64bits will be like the move to 32bits. Just as before, some programs will actually be slower. After all, only half as many 64bit pointers will fit in the cache hierarchy.
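A quick illustration of that pointer cost -- compile the same source as a 32-bit and then a 64-bit binary and compare:

#include <stdio.h>

struct node {
    struct node *next;   /* 4 bytes in a 32-bit build, 8 in a 64-bit build */
    int          value;
};

int main(void)
{
    /* on typical 32-bit targets: pointer 4, node 8;
       on typical LP64 targets: pointer 8, node 16 (padding included) --
       so half as many nodes fit per cache line */
    printf("pointer: %zu bytes, node: %zu bytes\n",
           sizeof(void *), sizeof(struct node));
    return 0;
}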

For programs with inherently large datasets, however, widespread 64bit computing is long overdue. Roughly seven years ago I developed an AVL (automatic vehicle location) system for tracking 18-wheelers and other vehicles within the US. Early versions of the display engine just memory-mapped the street database. When we moved from state to country maps, however, we had to remap views dynamically. This made the code larger, slower, and more error-prone.
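A minimal sketch of that kind of mapping (the file name is hypothetical and error handling is trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("streets.db", O_RDONLY);    /* hypothetical street database */
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) < 0) return 1;

    /* On a 32-bit box this fails once the file outgrows the ~2-3 GB of
       usable address space, forcing windowed remapping; on a 64-bit box
       the whole country map can stay mapped. */
    void *base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... walk the street data directly through base ... */

    munmap(base, st.st_size);
    close(fd);
    return 0;
}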

It’s no panacea, and it certainly won’t double the speed of your processor, but I am looking forward to a 64bit machine on my desk.
 

cas

Learning Storage Performance
Joined
May 14, 2002
Messages
111
Location
Pittsburgh, PA
Just to qualify my statement above, there are some applications that roughly double in speed on a 64bit machine. Encryption is a good example, although some of these benefits may already have been realized with the x86 SIMD units.
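A toy illustration of the word-width effect -- not a real cipher, just the same XOR pass done four bytes at a time versus eight, so each iteration of the wide version moves twice the data:

#include <stdint.h>
#include <stddef.h>

void xor_pass32(uint32_t *buf, size_t words, uint32_t key)
{
    for (size_t i = 0; i < words; i++)
        buf[i] ^= key;                 /* 4 bytes per iteration */
}

void xor_pass64(uint64_t *buf, size_t words, uint64_t key)
{
    for (size_t i = 0; i < words; i++)
        buf[i] ^= key;                 /* 8 bytes per iteration */
}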

Floating point is not, though: IEEE 754 64bit floating point has been available since the introduction of the 8087.
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
64-bit won't double the speed of your computer. In fact, in a lot of cases it might slow things down - as cas mentioned, if you start using a lot of 64-bit data constructs you can only fit half as many of them in cache - but then again, the cache of ClawHammer is expected to double in size. Coincidence?

If-Then-Else, possibly the most widely used construct in programmes, won't show any difference either way.

DB and string comparisons will benefit enormously - so your word processor will run faster - w00t!

Where Hammer will shine, though, is that in 64-bit mode there are an extra 8 GP registers to play with - which makes the compiler's job of shuffling data around MUCH easier. If the compilers are any good, recompiled code should show a significant gain.
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
Somehow I can't see Intel's compilers supporting x86-64 just yet. :) I was specifically referring to MS and gcc...
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
Cliptin: On the RAID setup, it was definitely the hard drive that was being changed. I have since set up another server using two Seagate X15s. This adapter had an option for "Write Through" or "Write Back". According to the help file, one (I forget which now) was not recommended, as the data was stored in the hard drive's memory and could be lost. Seagate also has a utility to change settings on their hard drives, but it doesn't work through a RAID card.

As far as Win2k is concerned, who knows? I did some searching for information, but never really found anything.

Bozo :D
 

Splash

Learning Storage Performance
Joined
Apr 2, 2002
Messages
235
Location
Seaworld
. ..
Groltz said:
And you're now breaking in at least your 4th new user name???

Correct. Not counting my inescapable "Ghost In The Machine" everpresence, there are only four of me gallivanting about the virtual realm these days.

[avatar images: corvair2.jpg (CORVAIR), splash2.jpg (SPLASH), giant2.jpg (GIANT), platform.gif (PLATFORM)]


As for iGary®, he went down with the HMS StorageReview when it had that devastating collision with an unseen iceberg on that fateful week back in December of 2001. :(



. ..
 

James

Storage is cool
Joined
Jan 24, 2002
Messages
844
Location
Sydney, Australia
James said:
I'm famous! :mrgrn:

It must have been something that I did to the XP side, since, as I said, I have just changed both Samba versions (and indeed server system architectures) and didn't have to change anything on the server end.

Once I get home and I'm in front of my notes (I keep a journal when I do computer work, so I can see what I have and haven't tried), I'll post a full explanation.
So I've been lazy.

I suspect the below won't help everyone, since my problem was more getting the XP box to remember the password to attach to the share, rather than it being a problem with XP specifically.

I've reviewed my notes, and it seems most of my problems came from the smb.conf file rather than anything else. I've since installed 2.2.3a's standard conf file and it all works fine, so nothing special there.

First I turned off encryption of passwords on the XP boxes. There's a registry flag to do this and it's identified in the Samba documentation.
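If memory serves (verify against the .reg files shipped with your Samba version's docs), the flag in question is:

REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanworkstation\parameters]
"EnablePlainTextPassword"=dword:00000001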

I then tested attaching to the Samba share from the server itself (i.e. my Sun box) with the smbclient program. That worked fine. Then I tried net use from the XP command line and that worked too.

Then I spent, according to my notes, four straight hours futzing around with the smb.conf, Solaris and XP in general trying to get the XP boxes to consistently attach to the shares without asking me for a password (XP seems to suffer from amnesia). Two days later I gave up on it and wrote a small DOS batch file and put it in the startup folder of each user:

rem Map drive G: to the Samba share at logon; /PERSISTENT:no stops XP trying to restore it itself
net use g: \\server\share\dir password /USER:user /PERSISTENT:no

... because I'm stuffed if I'm going to bugger around in the GUI when a DOS script will do the job.

Admittedly, I now have an XP laptop on a wireless connection that won't attach to the Samba shares ("this client does not have permission to connect"), which I need to investigate further.
 