News and commentary from the cross-platform scripting community.
Mail Starting 3/6/98

From: wesley@scripting.com (Wesley Felter);
Sent at Fri, 6 Mar 1998 23:45:04 -0600;
MS's real Java strategy

Man I hate the news... The Merc is reporting that MS is going to come out with a new Java development environment that extends the Java language with two new keywords. But nowhere in the article does it say what the keywords are! It frustrates me to see news articles that are so completely useless to anyone with technical knowledge.

From: jonathan@webware-inc.com (Jonathan Peterson);
Sent at Fri, 6 Mar 1998 17:07:58 -0800;
Re:What is The Enterprise?

If the interfaces are flattened out, they'll never break, because no vendor will have the power to break them. Right now Microsoft appears to be interested in a standard.

Microsoft pretty much killed CORBA as a standard when they ignored it and created their own DCOM standard. XML/HTTP may be simple and easy enough to implement to cut the knees out from under the more complex COM/DCOM/CORBA/JavaBeans, etc. standards. I hope you're right. A single, simple, clean method for managing distributed data and systems across multiple architectures would be a godsend.

From: delza@voyager2.cns.ohiou.edu (Dethe Elza);
Sent at Fri, 6 Mar 1998 17:07:20 -0800;
Re:What is The Enterprise?

I'm a little confused about your terminology. You want to do RPC over HTTP via XML, but XML is a data format, not a programming language. It has hooks for script code to be embedded within it, and is designed to be accessible from ANY scripting language, but it isn't a scripting language itself.
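To make the terminology concrete, here is a hypothetical sketch of what "RPC over HTTP via XML" could mean in practice: the caller serializes a method name and parameters as an XML document, POSTs it, and the server parses it back out. The element names (methodCall, methodName, param) and the method name are invented for illustration; no such standard existed at the time of writing.

```python
# Invented XML encoding of a remote procedure call. The document itself
# is just data -- the "scripting" happens in whatever language builds
# and parses it on each end.
import xml.etree.ElementTree as ET

def encode_call(method, params):
    """Serialize a method name and string parameters as an XML document."""
    root = ET.Element("methodCall")
    ET.SubElement(root, "methodName").text = method
    plist = ET.SubElement(root, "params")
    for p in params:
        ET.SubElement(plist, "param").text = str(p)
    return ET.tostring(root, encoding="unicode")

def decode_call(xml_text):
    """Parse the XML back into (method, params) on the receiving end."""
    root = ET.fromstring(xml_text)
    method = root.findtext("methodName")
    params = [p.text for p in root.find("params")]
    return method, params

# What would travel as the body of an HTTP POST:
wire = encode_call("stockQuote.get", ["IBM"])
method, params = decode_call(wire)
```

The point of the sketch: XML is only the envelope. Any scripting language that can build and parse this document can participate, which is exactly why it is language-neutral without being a language.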

The way XML (and HTML) documents are exposed to scripting languages is via the DOM (Document Object Model), which the W3C is working to standardize even as we speak:


Also, CORBA is kind of overkill for XML because the XML spec takes care of the kinds of encapsulation and platform independence that CORBA is designed for, and in a more lightweight, simple fashion. CORBA is useful for putting wrappers around legacy databases and apps so that they are accessible across platforms and languages, and also for writing industrial-strength distributed applications. For XML, Java RMI is a much simpler, lightweight distribution protocol, but anything can be used.

Sticking XML into a database makes a lot of sense, because XML is essentially a hierarchical database format in a serial form for easy distribution. A web server tied to a database can serve data in XML format very easily. Phil Greenspun, author of Database Backed Websites, is writing another book which addresses this.
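As a sketch of how easily a database-backed server can emit XML, here is a minimal example using an in-memory SQLite table. The table, column names, and record tags are invented for the example; the idea is just that each row becomes a small hierarchical document.

```python
# Sketch: serving database rows as XML. An in-memory SQLite table stands
# in for the database behind the web server.
import sqlite3
import xml.etree.ElementTree as ET

def rows_to_xml(rows, record_tag, field_names):
    """Wrap each row in a record element, one child element per column."""
    root = ET.Element("records")
    for row in rows:
        rec = ET.SubElement(root, record_tag)
        for name, value in zip(field_names, row):
            ET.SubElement(rec, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
db.execute("INSERT INTO employees VALUES ('Ada', 'Engineering')")
rows = db.execute("SELECT name, dept FROM employees").fetchall()
xml_out = rows_to_xml(rows, "employee", ["name", "dept"])
```

The serialized form is trivially distributable: any client that can parse XML can reconstruct the hierarchy without knowing anything about the database behind it.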


Also, CORBA is not unix-only. It exists for Mac and PC worlds (and others). In fact its whole purpose is to create an information backbone which is language, OS, and platform neutral. A CORBA program can be written in C, C++, Java, Pascal, Lisp, or Frontier and talk to any other CORBA program. Two CORBA programs can be running on different hardware, running different operating systems, but still communicate transparently.

CORBA is also supported by every major (and most minor) software vendor, at least those with any interest in networking, making it a very independent, collaborative standard. The only major vendor which does not support CORBA is, wait for it, Microsoft, which proposes COM as the solution. COM, however, is heavily tied to the Windows platform, negating most of the benefits of CORBA. Fortunately the two *can* speak to each other via CORBA/COM gateways.

Apple Events are RPC only in the special case where everything is on one computer. Most RPC programming assumes that the program will be distributed across multiple computers, although it still works in the case where the pieces are not distributed.

Keep digging!

From: edh@cybernex.net (Eric D. Hancock);
Sent at Fri, 6 Mar 1998 17:05:35 -0800;
Re:What is The Enterprise?

CORBA, Apple Events and COM are all RPC protocols.

I still think (respectfully) that you're missing the importance of things like CORBA and Java to (us) corporate developers.

The commercial world is so very different from the corporate world. CORBA is open, COM is closed. COM is a dead end in many respects. I won't write software on such uncertain ground, because I can't afford to rewrite applications and change file formats every 2 years.

Why are so many corporate desktops still running Windows 3.1? Because they trusted the APIs to stay around for a while. Just because someone introduces something bigger and better doesn't give them a license to abandon the old technology in favor of the new. Nor does it mean I have the money to buy all new hardware. That is one of the reasons mainframes still make sense to so many companies: support.

I don't trust that COM will be around for the long haul, however nice the model is. And that goes double for the ActiveX portion of it.

From: scottwil@microsoft.com (Scott Wiltamuth);
Sent at Fri, 6 Mar 1998 17:04:44 -0800;
Re:What is The Enterprise?

It is a big mistake to define a market based on technology or product usage. By your definition, NT can never be an important factor in running enterprise networks because NT isn't UNIX! A better definition would be based on what tasks these organizations are trying to accomplish.

From: joelong@MICROSOFT.com (Joe Long);
Sent at Fri, 6 Mar 1998 17:03:45 -0800;
Re:What is The Enterprise?

I think this definition is fundamentally flawed. I believe an enterprise is defined by its needs, not by the solution it chooses to satisfy those needs. By your definition, an enterprise can lose its enterprise status (whatever that means) by migrating from UNIX to NT. Or even worse, by making a fundamental "middleware" choice like what object model to use when it decides to migrate from no object model (the current prevalent state of affairs) to using one.

Another example -- Seafirst Bank was the last major bank in the US to drop Mac clients. Once they picked up Windows clients (assuming they had Unix servers), did they then become an Enterprise?

From: ehall@ehsco.com;
Sent at Fri, 6 Mar 1998 12:13:51 -0800;
Re:What is The Enterprise?

You keep talking about using HTTP as a transport for XML. Somewhere in the back of the HTTP server, database lookups and changes are being made. So there's a whole 'nother layer there that you're ignoring.

LDAP provides a generic data exchange service for clients and servers. This means you could use LDAP as a transport for XML and have the client talk to the database directly (or rather, talk to the LDAP server, which is talking to the database, which could be LDAP native, or not). The benefits here are that the LDAP protocol supports authenticated access to each individual record, provides replication and referral services among different servers and more.

This isn't really being done much today. Currently, LDAP is pretty tightly linked to "people" data specifically. There's a lot of debate going on as to whether or not LDAP should be expanded beyond people data into an open, generic data exchange protocol, sort of an Internet-based, network-aware ODBC.

I'm not evangelizing; I'm looking for people who are thinking along these lines. Just curious if you were aware of these issues and your thoughts on them.

From: sbove@ravenswood.com;
Sent at Fri, 6 Mar 1998 15:30:54 -0800;
Re:"RPC & TIB"

Truly distributed apps...a way for them to call each other & interact...a way for them to push/pull data from each other across a transparent "info-space"...these are all the rage these days...everyone wants this.

I suggest thinking about this issue as two separate worlds: 1) invoking remote functions, and 2) publishing/subscribing to data.

RPC is nice for the former. TIB (from http://www.tibco.com/) is tried and true for the latter.

Since interaction between applications is always "controlled", usually by the applications themselves (for obvious reasons), most of what happens between applications can be solved by having a bi-directional, real-time "information bus" to which all the applications are connected. Hence my view that TIB (or something like it... see also http://www.vitria.com/) is a far more potent technology for the environment created by the Internet for complex distributed apps.

TIBCO's technology accommodates this via "Subject-based addressing(TM)" and a fault-tolerant/guaranteed-delivery flavor of UDP (which they have now mapped onto TCP for use via the Internet). When one application needs to send a certain type of data to one or many other applications, it pushes that information onto the bus. When it needs something, it subscribes to the right address space (each "environment" can have its own directory of what these spaces are). All the work about what data is in each space is done up front when the data is assigned to these spaces (sorta' like a dynamic web page). The spaces are given nice English names like new.employees or data.stock.ibm. The format of any "data space" is arbitrary (could be XML, EDI, etc). TIB daemons are available for Perl, Java, C, and C++ applications.
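The publish/subscribe mechanism described above can be sketched in a few lines. This is a toy in-process "bus", not TIBCO's actual API: the subject names are taken from the examples in the text, and the wildcard rule (a trailing "*" matches any suffix) is a simplification invented for this sketch.

```python
# Toy subject-based publish/subscribe bus. Subscribers register interest
# in a subject pattern; publishers push messages onto the bus; the bus
# fans each message out to every matching subscriber.
from collections import defaultdict

class InfoBus:
    def __init__(self):
        self.subscribers = defaultdict(list)  # subject pattern -> callbacks

    def subscribe(self, pattern, callback):
        self.subscribers[pattern].append(callback)

    def publish(self, subject, data):
        for pattern, callbacks in self.subscribers.items():
            if self._matches(pattern, subject):
                for cb in callbacks:
                    cb(subject, data)

    @staticmethod
    def _matches(pattern, subject):
        # Simplified wildcard: a trailing "*" matches any suffix.
        if pattern.endswith("*"):
            return subject.startswith(pattern[:-1])
        return pattern == subject

bus = InfoBus()
received = []
bus.subscribe("data.stock.*", lambda subj, data: received.append((subj, data)))
bus.publish("data.stock.ibm", {"last": 104.5})   # delivered to subscriber
bus.publish("new.employees", {"name": "Ada"})    # no subscriber; dropped
```

Note that publishers never name their recipients: the subject space does all the routing, which is what decouples the applications from each other.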

TIB is a tried and true "Info bus" technology that has been in use on trading floors around the world for over 10 years (ever wonder how all those trading desks you see in movies like Wall Street get information to and from the traders in sub-millisecond "real-time" for millions of trades per day?)

Intel also uses TIB to coordinate thousands of automated machines that need to work together in perfect synchronicity inside their chip FABs. Each machine on the FAB floor is a TIB enabled publisher/subscriber. All machines look to the TIB for information they need to receive or send. The TIB "daemon" for any given machine is less than 100k and can be in SW or embedded.

Yahoo's stock service is on TIB. Ever wonder how that thing gets updates to 40 quotes in your portfolio in less than a second... over the Internet? Ever imagine that it's running on a Cray? Think again. The whole service, which handles over 15 million page views per day, runs on 2 (two) Sparc 20s with tons of memory. The data feeds come in over a TIB and update a real-time database from TIBCO called TIC (each stock has its own "address space" on the TIB). Your queries are to memory only. Thousands of simultaneous updates and reads can be managed per second with minimal resources.

From: bdmorgan@fandago.read.tasc.com (Bryan Morgan);
Sent at Fri, 6 Mar 1998 16:21:23 -0600;
The Enterprise/Object Wars/Open Scripting

Even more than usual, your thoughts today on the Enterprise caused me to pick up the "digital pen" and reply to you. I'm probably one of the rare few who, over the last two years, have built "enterprise" (by your definition) apps using all three of the most popular distributed object standards: DCOM, CORBA, and Java RMI. (I also write the "Distributed Objects" column for JavaWorld magazine, among other things, so we have the writing bug in common, by the way...)

I'd like to interject a few brief thoughts from my personal perspective:

1) First, despite the recent downturn in CORBA's popularity within our industry, I'd like to apply a little logic. If we take your definition of the Enterprise to be Windows clients talking to Unix/BigIron servers, AND we believe the statistics showing that nearly all Win clients talk to Win servers using DCOM, then the success of CORBA to date within the Enterprise must be due to the fact that it is the best solution for exactly that case (again, Win clients to Unix/BigIron servers). If you accept that, CORBA is in fact being mislabeled when it is referred to as a UNIX protocol.

2) Distributed object architectures such as DCOM and CORBA offer a sort of nirvana for distributed-system and client/server developers. Unfortunately, I believe the religious war being waged by the two camps (and we might as well throw in JavaSoft with RMI as a third camp) is negating both forces to a net-zero effect. In other words, there is so much confusion about which technology is superior, and the actual nuts and bolts are so difficult for managers to get their heads around, that most people are opting to simply choose a messier but simpler solution such as CGI (in the Web case) or two-tier client/server.

Your statement about wiring CORBA to an XML interface is dead-on. CORBA objects cannot currently be scripted using any viable method (there's vaporware galore on the subject, but no standardized method). XML perhaps offers an opportunity to open up this powerful world of objects to the Web client in a new way. DCOM has its VBScript; why not unify all of the object models under XML?

Where do I sign up to help with this undertaking? I'd love to discuss it further....

From: todd@polygon.net (Todd Blanchard);
Sent at Fri, 06 Mar 1998 10:34:59 -0700;
What is The Enterprise?

Your definition of the Enterprise is too narrow.

I make my living writing software for the enterprise. If Unix and Windows were all there was, life would be really easy. It's not.

The Enterprise is better defined as the hodgepodge of systems acquired over time, running a mix of purchased software and proprietary mission-critical software developed internally by the local IS department. It's the entire computer infrastructure of a company. In the larger companies, like telcos, you will find old mainframes running applications written in the 60's in assembly language or COBOL or FORTRAN that nobody fully understands anymore, being accessed from Unix or Windows or Macs or minis like AS/400s - often using screen-scraping technologies for the interapplication communications, sometimes using proprietary gateways. Almost nothing is ever replaced or thrown away.

Network topology is usually a mini-internet: a whole host of different networks tied together by gateways. Much data communication among systems and applications is still performed in batches using scheduled processes. Sometimes legacy systems can be wrapped up like an object behind CORBA interfaces and accessed in near-real time.

Windows NT has little hope of invading this environment in any significant way in the near term.

These systems have extremely high availability requirements and are often not rebooted more than once or twice a year. It will take over 10 years of burn-in for NT to gain the trust of people who operate these kinds of systems. Serious IT directors will not trust their critical processes to such a young upstart. Unix is itself a relatively new entry to this computing circle. Unix has good reliability and has been shown to scale up well to mainframe class machines. It also has 20 years of burn-in time. Only in the last 5 years or so has Unix achieved the respect and trust of these IT directors.

For Microsoft to be taken seriously in the Enterprise, they must improve quality. A lot. NT uptimes must approach 1 year without rebooting. Hardware must be scalable to 8 or more processors. Network throughput still lags. Unfortunately, Microsoft seems more intent on crushing competitors than on improving its products. So far their strategy for invading the enterprise has been more about buying business through "porting grants" and less about providing the tools and stability the enterprise needs. Until that changes, they won't be taken seriously.

Meanwhile, IBM is now providing Java on the mainframe and the minis, which is helping a lot in tying things together. Java is slowly becoming a common language in the enterprise because it's portable, computational resources can move from machine to machine, and Java makes networking among diverse machine types relatively easy. It also preserves earlier hardware and software investments. By writing Java interfaces to legacy systems on legacy machines, Java extends the flexibility and lifespan of these systems. It's an easy sell. And if the enterprise ends up using Java to tie all the systems together, platform type in general and NT in particular become increasingly irrelevant.

This page was last built on Tuesday, April 7, 1998 at 6:12:00 PM, with Frontier version 5.0.1. Mail to: dave@scripting.com. © copyright 1997-98 UserLand Software.