Web Services are RPC?

Continuing the exchange of comments with Michi Henning here are some more thoughts.

Michi argues that “WS is *RPC*, by definition”. This is a definition I haven’t come across. I personally see a big difference between a request-response message exchange pattern and a Remote Procedure Call. The SOAP processing model and the focus on messaging allow us to move beyond RPCs/RMIs and do so much more. Jim and I have tried to illustrate some of the thinking and the possibilities with MEST and SSDL. I definitely don’t equate Web Services and SOAP with RPC. This topic has been exhausted, and many smart people have told us that RPC systems are not good for inter-organisation and scalable solutions, or even for distributed computing in general (e.g. Waldo’s paper). There is nothing RPC-like in the SOAP processing model. Yes, SOAP can be (mis)used to implement RPC systems, but that’s not inherent in the protocol. Which reminds me… we don’t treat SOAP as a transport protocol but as a message transfer protocol. That’s why we talk about SOAP over HTTP, over TCP/IP, and so on.
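To make the distinction concrete, here is a minimal sketch of a document-style, one-way SOAP message. The element names and payload (“order”) are purely illustrative, not from any real service. The point is that nothing in the envelope implies a procedure call or a blocking reply; it is just a self-describing message that could be transferred over HTTP, TCP, SMTP, or anything else.

```python
# Sketch: wrapping a business document in a SOAP 1.2 envelope.
# The payload is a document, not a parameter list for a remote procedure.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://www.w3.org/2003/05/soap-envelope"

def build_message(doc_xml: str) -> str:
    """Wrap an arbitrary XML document in a SOAP envelope."""
    env = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_ENV}}}Body")
    body.append(ET.fromstring(doc_xml))
    return ET.tostring(env, encoding="unicode")

# An illustrative payload: a business document, not an argument list.
msg = build_message("<order><item>widget</item><qty>3</qty></order>")
print(msg)
```

Whether the receiver replies at all, and over which protocol binding, is a separate concern from the message itself. That separation is what I mean when I say SOAP is a message transfer protocol rather than a transport or an RPC mechanism.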

I don’t understand why we should treat the issue of SOAP performance in isolation from everything else that it has to offer. Wide adoption is a very important aspect of XML and SOAP. As I asked in my previous post, why doesn’t anyone complain about the performance cost of C++/Java/.NET over assembly programming? There is a huge advantage in modularisation, productivity, maintenance, reuse, etc. with high-level programming languages/runtimes. That’s why we accept the added performance penalty of moving to new levels of abstraction for building systems. The same applies to distributed computing. Yes, one could build fast distributed applications using CORBA, Ice, and sockets. Put in enough care and such systems could even scale. However, what about cross-organisation integration? What about new abstractions for building distributed systems? What about tooling? Do we care if a protocol implementation can saturate the network if the organisation next door doesn’t understand it? When compared to the cost of communicating over the high-latency Internet, is the difference between SOAP processing and an optimised communications protocol so relevant, given the interoperability, wide-adoption, tooling, etc. benefits? (Please note that I am not arguing in favour of completely ignoring performance.) Yes, inside the firewall it may be possible and beneficial to build solutions using CORBA or Ice or sockets. If everyone had agreed on a very fast binary protocol, I would have been very happy. Would that have happened had XML not been around? I seriously doubt it. XML is here to stay, at least for the next decade or so.
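The tradeoff being debated is easy to see on a toy example. The sketch below encodes the same record as XML text and as a packed binary layout, assuming a made-up schema (an unsigned 32-bit id and a 64-bit float price); it is meant only to show the order-of-magnitude size gap, not to be a benchmark of any real protocol.

```python
# Illustrative size comparison: XML encoding vs. a packed binary layout
# for the same toy record. Field names and types are assumptions.
import struct

record = {"id": 42, "price": 19.99}

xml_form = f"<order><id>{record['id']}</id><price>{record['price']}</price></order>"
bin_form = struct.pack("!Id", record["id"], record["price"])  # 4 + 8 = 12 bytes

print(len(xml_form.encode()), "bytes as XML")
print(len(bin_form), "bytes as packed binary")
```

The binary form is several times smaller and cheaper to parse; the XML form is self-describing and readable by any organisation with an XML parser. Which cost matters more depends on whether the wire sits inside your machine room or between two companies on the Internet.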

Am I “running with the herd” as Michi suggests? Perhaps. But when that herd is the larger part of the industry, I am happy to do so. And yes, the fact that even hardware vendors are investing in XML processing solutions tells me how serious the industry is about making this SOAP thing work. So, I am happy to go to new places with the rest of the herd. I don’t want to stay in the old valley with no grass. I am excited about the opportunity to find new grass, help make the herd larger, and get to know new places. The herd shows me that there is enough interest in investing to make sure we reach the new destination, so I can come up with new ways of building distributed applications. If I had believed that the herd was heading towards a swamp, I would have said so. I have gone against the flow in the past 🙂

BTW… Michi, please have a look at Indigo. You’ll see why I made the comment about Microsoft and SOAP. Also, I suspect that other companies may be doing something very similar for their distributed computing platforms. It absolutely makes sense.

Is it irresponsible to advocate document-centric solutions based on XML (either MEST- or REST-based)? As things stand at the moment, I don’t believe so. However, history will show whether the SOAP advocates have been wrong.

UPDATE (5 July 2005, 23:38 BST): This is Michi’s response to the above post, sent to me via email with a request to publish it here rather than as a comment, given my weblog’s inability to accept formatted comments (will try to correct this soon).

Savas, could you please explain to me how SOAP is *not* RPC? With SOAP, a client takes some parameters, invokes a service on a remote endpoint, the service processes the data, and eventually returns results. (Typically, the client blocks until the results are available, although it could use async invocation.) How is this different from CORBA or Ice? Or, to ask the question differently, what exactly is the “request-response message exchange pattern” you mention, and what exactly does SOAP do to support this that CORBA or Ice do not?

As to SOAP performance, it is atrocious, and unnecessarily so. We could have everything WS and SOAP can do, but much faster, if we didn’t use this idiotic protocol.

I also disagree with the assembly programming analogy. In fact, to me, it seems that many aspects of WS are much more complex and tedious than CORBA or Ice. Take WSDL: I wrote about some of its problems in issue 2 of our newsletter (http://www.zeroc.com/newsletter/issue2.pdf).

“And yes, the fact that even hardware vendors are investing in the XML processing solutions tells me how serious the industry is in making this SOAP thing work.”

No, I don’t see it like that at all. What has really happened is that the industry was silly enough to adopt an incredibly inefficient encoding. So inefficient, in fact, that a market window has opened for hardware vendors because software isn’t good enough to get the performance. The fact that people build hardware to support XML processing in no way vindicates the technical choices that were made. Rather, it condemns those choices.

“Is it irresponsible to advocate document-centric solutions based on XML (either MEST- or REST-based)?”

No, of course not, and that is not what I said. But it *is* irresponsible, IMO, to tell people that SOAP is fast, that everyone should be using it, and to dismiss arguments that show it to be an extremely poor protocol.

There are only a few distributed computing experts in the world. The whole industry looks to these experts for leadership. To me (counting myself as one of these experts), that imposes a moral obligation to judge things openly and honestly, taking past experience, research, and best practice into account to the best of my ability. As an expert, I cannot honestly recommend a protocol that requires a hundred times the bandwidth, CPU cycles, and latency of other protocols, especially when there is no functional gain for all that overhead.
