Here’s Michi’s comment to my last post with the formatting preserved:
Savas, my five cents worth, as usual…
> We are even starting seeing hardware products to help with XML processing.
Now, *that’s* really quite something. Translation: “The processing overheads of SOAP are so staggeringly high that the only way to get acceptable performance is to build custom hardware.”
This is typical of so much of what happens in this industry. First, we adopt a protocol that is incredibly inefficient and neglects just about everything we’ve learned about protocol design in the past three decades. Then we widely adopt this protocol, common sense notwithstanding. Then we publish articles to state that 140 RPCs per second is “not slow.” And, finally, when someone such as myself or Marc points out that 140 RPCs per second is not only slow, but atrociously so, we counter by saying:
“We are even starting seeing hardware products to help with XML processing.”
Amazing.
Savas, you ask:
> How many hardware manufacturers have got into the ICE protocol processing market?
Of course, the answer to that question is zero. Why? Because Ice, even without hardware support, is tens or hundreds of times faster than SOAP already. So fast, in fact, that it easily saturates a 100Mbps network, and achieves around 500Mbps over the backplane. No need for hardware here, obviously.
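To make the scale of the gap concrete, here is a minimal, made-up sketch (not Michi’s or Marc’s benchmark, and not Ice or SOAP themselves) comparing the per-message decode cost of an XML-encoded payload against the same values in a fixed binary encoding; all message contents and names are invented for illustration:

```python
# Rough illustration of the per-message cost gap between parsing XML
# and unpacking a fixed binary encoding. Payload and tags are made up.
import struct
import timeit
import xml.etree.ElementTree as ET

# The same three integers, once as a SOAP-style XML body, once packed.
xml_msg = b"<env><body><a>1</a><b>2</b><c>3</c></body></env>"
bin_msg = struct.pack("!iii", 1, 2, 3)

def decode_xml():
    # Parse the document and pull the integers back out of the elements.
    root = ET.fromstring(xml_msg)
    return [int(e.text) for e in root.find("body")]

def decode_bin():
    # Unpack three network-order 32-bit integers directly.
    return list(struct.unpack("!iii", bin_msg))

assert decode_xml() == decode_bin() == [1, 2, 3]

n = 10_000
t_xml = timeit.timeit(decode_xml, number=n)
t_bin = timeit.timeit(decode_bin, number=n)
print(f"XML: {t_xml:.4f}s  binary: {t_bin:.4f}s")
```

The exact ratio depends on the parser and payload, but the XML path is doing tokenizing, tree building, and text-to-integer conversion where the binary path does a single fixed-layout unpack, which is the kind of overhead the comment is pointing at.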
> Microsoft is betting on SOAP and made it a key part of its distributed computing platform, not DCOM.
Betting on SOAP? Hmmm… .NET remoting does not use SOAP. It uses a binary protocol for performance reasons. So, I’m not sure that Microsoft are “betting on SOAP”. They certainly are not for their .NET remoting protocol. And DCOM failed because it could not be made to scale, due to its misguided garbage collection idea. And because DCOM, amazing as that may sound, was even more complex than CORBA.
> Nowadays, issues related to security, policies, contracts, agreements, reliability, metadata processing, semantics, etc. require us to come up with new paradigms for building distributed systems. These paradigms require new models, new protocols, new tools.
Absolutely. I don’t see how a botched protocol such as SOAP, used as the transport, would in any way contribute towards solving these issues.
> It seems that Michi still believes that RPC is the way to build distributed systems.
Yes, of course. In fact, I am not aware of how I could build a distributed system without RPC. After all, the messages have to somehow move between client and server, I would think. And WS is no different here: a SOAP interaction sends some data to an endpoint which performs some processing on the data, and returns some results. If that’s not RPC, I don’t know what is. WS *is* RPC, by definition. And everything else is simply a matter of granularity: I can implement coarse-grained models using Ice or CORBA as well as I can with WS.
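The equivalence being argued here can be sketched in a few lines; this is a hypothetical illustration with invented names, not code from any WS or Ice toolkit. The same endpoint serves a “document-centric” exchange and a classic RPC stub:

```python
# Hypothetical sketch: one endpoint, two styles of talking to it.
# Both send a request, process it remotely, and return a result,
# which is the sense in which a WS interaction is RPC.

def endpoint(document: dict) -> dict:
    # "Server" side: process the incoming message, build a reply message.
    return {"sum": document["a"] + document["b"]}

# Document-centric style: construct a message, send it, read the reply.
reply = endpoint({"a": 1, "b": 2})
assert reply["sum"] == 3

# RPC style: a stub hides the same message exchange behind a call.
def add(a: int, b: int) -> int:
    return endpoint({"a": a, "b": b})["sum"]

assert add(1, 2) == 3
```

The only difference between the two call sites is granularity and who constructs the message, not the request/process/reply shape of the interaction.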
By all means, let people build document-centric applications, I have no problem with that. But to tell them repeatedly that it is perfectly OK to do so on top of a hopeless transport and, worse, to tell them that it doesn’t matter that the transport is hopeless, is, at the very least, irresponsible, in my opinion.
> The strength of XML and SOAP is not in their performance advantages but in their wide adoption and tooling support.
Jim’s blog entry was about performance, and Marc’s and my replies were about performance. Jim said that “SOAP is not slow” and we provided some figures to show that this is not a sustainable statement. Why are you bringing up a completely new topic now? Possibly because there is no way that SOAP can possibly win on performance?
Agreed, standardization is useful. Especially so when what is being standardized is high-quality technology. For WS, the verdict on the quality of the technology is still outstanding. And, please, don’t cite wide adoption as a counter-argument here. We all know that wide adoption is not necessarily an indication of quality. Marketing, hype, fashion, and “run with the herd” behavior usually have a lot more to do with wide adoption than technical merit.