Comparison

This exchange spanned several posts comparing Abater with one of the most popular Y2K remediation tools.  Both Abater and a Y2K remediation tool are required.  Ted (whose name has been changed here) was the President and CEO of the tool's manufacturer.  Bookmarks to the sections below are provided:

Initial posting by Ted: Ted's tool is great for assessment of large networks (without Abater)
Response by Vic Fanberg: Ted's tool lacks one important capability: the ability to prioritize properly
Follow-up posting by Ted: Ted's tool triages by last save date
Response by Vic Fanberg: Triaging by date can result in thousands of times more effort in Y2K remediation
Posting by Clint: Vic is going beyond what the customer asked
Response by Vic Fanberg: Abater is required to create an efficient remediation plan (which is an output of the impact assessment process)

Initial posting

> Here's a quiz: You're the administrator of an enterprise network with 250
> servers and 15,000 PCs at 150 locations spread over an entire state.
> You have one week to check every single PC to ensure they're
> Y2K-ready.  What do you do?

> Click here: <link provided to press release about Ted's tool>

As I understand your tool, what it does for spreadsheets is identify
problem cells that may have Y2K issues. Your tool is an assessment
tool, but it lacks one key element of a true assessment. When we
finished assessment on the mainframe, we wanted to know which groups of
files we should remediate together for the greatest efficiency. What
help does your tool give the administrator in deciding what order to
remediate the spreadsheets in? It makes a big difference whether you
sequentially remediate each PC or do some high-level planning first.

IMHO, many spreadsheets in a corporation have a lot in common.
Spreadsheets are shared between users. One spreadsheet is used as the
basis for an updated version of another. How many times do you expect
the administrator to remediate the same spreadsheet in his Y2K efforts?
Using your tool alone, you leave the administrator no choice but to
remediate identical spreadsheets multiple times - every time a copy is
encountered. Every time he remediates a spreadsheet similar to one he
has already done, he starts from scratch - not using any of the data he
has accumulated from the similar spreadsheets. That is a great deal of
wasted effort on the administrator's part.

Follow-up posting

> Our standard <tool> lists all data files by edit date so that you can triage
> by age.

That is unfortunate, as triaging only by last save date is problematic.

As you know, most of the operating systems your product will run on
keep three dates for a file: creation date, last saved date, and last
access date. A file that is used constantly for lookup but rarely
modified will likely have a very old last saved date, so triaging by
that date could place your most critical files at the wrong end of the
list.

Furthermore, the last saved date may be updated essentially at random,
particularly in Excel spreadsheets. If a user opens a spreadsheet,
moves the cursor, and closes the spreadsheet, he will be prompted to
save on exit, which would alter the last saved date. Some users will
decline because they know no changes were made; others will accept the
default and save.

In a future version, you may want to consider giving people the OPTION
of sorting by last accessed date. Even that is not perfect, since many
of the dates tend to get lost in email systems, backup programs,
packing into compressed archives, etc.
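
For what it is worth, here is a rough sketch in Python (my own
illustration only, not taken from Ted's tool or from Abater) of how the
three dates can be read and how the triage order changes depending on
which date you sort by:

    import os

    def file_dates(path):
        # os.stat exposes all three dates the operating system keeps
        st = os.stat(path)
        return {"created": st.st_ctime,    # creation (inode change time on Unix)
                "modified": st.st_mtime,   # last saved
                "accessed": st.st_atime}   # last read

    def triage(paths, which="modified"):
        # Newest first.  A lookup-only file sorts near the top by
        # "accessed" but near the bottom by "modified", which is the
        # problem described above.
        return sorted(paths, key=lambda p: file_dates(p)[which], reverse=True)

Sorting the same list with which="accessed" instead of which="modified"
puts a heavily used but rarely saved spreadsheet in a very different
position.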

Finally, consider a single spreadsheet that was distributed yesterday
(or last week or last month) to all 15,000 employees. Triaging by date
alone means that when the remediator gets down to that date, he will
individually remediate the same spreadsheet 15,000 times. Even if he
realizes all 15,000 are probably identical, he has no choice because,
in most cases, he doesn't know which ones might have been changed.
IMHO, it would be much better to remediate the single spreadsheet once
and distribute 15,000 copies to the users, because you know from the
assessment that the spreadsheets are all identical.

Triaging only by last save date is a terrible idea for a system this
large. Your tool has done the administrator a disservice in that area
by not providing the best assessment information for such a large
organization.

Posting by Clint

>it looks as if you're trying to do more than the customer wanted.

Not really. My model for Y2K was something like this: (1) do an impact
assessment, (2) do the remediation based upon the impact assessment,
(3) unit-level testing, (4) system testing, (5) certification, and (6)
interface testing. There is no step between impact assessment and
remediation. One output of the impact assessment is a plan for what
order to do the remediation in.

The customer is using Ted's tool to do an impact assessment. Ted's
tool claims to handle all 5 layers of the assessment (hardware,
software, data, etc.). With a properly designed tool you can pump a
lot more useful information out of the assessment than Ted's tool does.

We have a spreadsheet assessment tool that is an add-on to other tools
(it is not specific to Y2K), so specific Y2K tools are still required
for the actual remediation, but our add-on tool is much better for the
assessment. The tool I wrote identifies duplicate files based upon
their internal file structure (ignoring the file name and date/time
stamps).
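
To make the idea concrete, here is a simplified sketch in Python
(purely illustrative; the real tool compares internal file structure,
whereas this stand-in just hashes the raw bytes) of grouping files by
content so that file names and date/time stamps play no part:

    import hashlib

    def content_key(path):
        # Hash the file contents; the name and OS date/time stamps are
        # ignored.  (A byte-for-byte hash is a crude stand-in for a
        # comparison of the internal file structure.)
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()

    def duplicate_groups(paths):
        groups = {}
        for p in paths:
            groups.setdefault(content_key(p), []).append(p)
        # Each group with more than one member needs to be remediated
        # only once; the fixed copy can then be redistributed.
        return [g for g in groups.values() if len(g) > 1]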

What I have always wanted to come out of an impact assessment with is
the best understanding of what the problems are, where (down to the
file level) they are, and what the most efficient way to handle them
is. In our mainframe impact assessment we divided the files by type.
As I remember, on the mainframe we had categories of no date
references, simple references (date MOVEs, etc.), and complex
references (date comparisons, date arithmetic, etc.).
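
Translating those mainframe categories to spreadsheets, a
classification pass might look roughly like the hypothetical sketch
below (Python; the list of date functions and the operator test are my
own guesses, not how Abater or any other tool actually decides):

    import re

    DATE_FUNCS = re.compile(r"\b(DATE|DATEVALUE|NOW|TODAY|YEAR|MONTH|DAY)\s*\(", re.I)
    COMPLEX_OPS = re.compile(r"[<>=+\-]")

    def classify_formula(formula):
        body = formula.lstrip("=")      # drop the leading "=" of an Excel formula
        if not DATE_FUNCS.search(body):
            return "no date references"
        if COMPLEX_OPS.search(body):    # comparison or arithmetic around a date
            return "complex (comparisons or arithmetic)"
        return "simple (reference only)"

    def classify_sheet(formulas):
        # The worst category found in any cell decides the whole file.
        ranks = ["no date references",
                 "simple (reference only)",
                 "complex (comparisons or arithmetic)"]
        worst = 0
        for f in formulas:
            worst = max(worst, ranks.index(classify_formula(f)))
        return ranks[worst]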

There has already been another email exchange between Ted and me sent
to the discussion group. Ted said in his response that he triages by
file modified date. I hope Amy sees fit to forward those messages.
Just in case she doesn't, here is the last paragraph or so of my next
email:

<Snip>
Finally, consider a single spreadsheet that was distributed yesterday
(or last week or last month) to all 15,000 employees. Triaging by date
alone means that when the remediator gets down to that date, he will
individually remediate the same spreadsheet 15,000 times. Even if he
realizes all 15,000 are probably identical, he has no choice because,
in most cases, he doesn't know which ones might have been changed.
IMHO, it would be much better to remediate the single spreadsheet once
and distribute 15,000 copies to the users, because you know from the
assessment that the spreadsheets are all identical.

Triaging only by last save date is a terrible idea for a system this
large. Your tool has done the administrator a disservice in that area
by not providing the best assessment information for such a large
organization.
<End Snip>

Would you be satisfied with merely a list of which files have date
problems, sorted by last modified date, when you could have obtained
the same information with notations of which files were compared and
found exactly identical (so you only had to fix it once, rather than
15,000 times) or similar (so you could transfer knowledge of one
spreadsheet to all the similar ones)? If you are satisfied with Ted's
method, you may do 15,000 times more work than someone using a better
impact assessment method. Then again, you may get rich if you are
billing by the hour <G>.