I have a bunch of things on my mind ready to go this morning, including the next in a series of
voicemailcasts with Doc Searls, a defense of XML when used as an object serializer, and the awful tendency of techies, especially those who work at Google, to have little respect for stuff that works, always wanting to reinvent without using their supposedly brilliant minds to evaluate their approach, thereby burning decades of know-how on a bonfire of geek vanity. Probably a few other things I'm not remembering at the moment.
#
Okay first up is the
26-minute podcast that follows Doc's
kickoff of our nascent series. You will feel like you're eavesdropping on a conversation between two old friends, which is totally what it is. I remember the first time I met Doc, in
Buck's restaurant in Woodside, CA. I also remember very clearly the day in
1999 I was at his house in Redwood City, helping him get going with his Manila blog, he was one of the very first, and understood blogging in an instant. Doc looks like
Wilford Brimley. He got the name Doc because he was a radio personality in North Carolina in the 1970s. He has a radio voice. In this voicemail I talk about Fargo, the masterful FX series. I just watched season 2 and am now working my way through season 3. I talk about my development process for the last decade or so. Very different from the previous 30 years. That's just the beginning. It's a content-rich podcast. Hope you listen and enjoy. Looking forward to Doc's rebuttal.
#
It's a fact that XML can be used to serialize complex objects from all kinds of programming languages. I know this because I do it. Have been since 1998. So when a geek says on Twitter that you can't, he or she could get
pushback from me. Here's a
thread that illustrates the point. Further, XML and JSON are equivalent as object serializers. I know because I generate both
JSON and
XML versions of the
feed for my blog. And as an experiment, my new XML-RPC implementation can use JSON encoding
everywhere, as an option. Of course there are not many servers that can understand this, so sticking with XML for now is the only practical option. I don't have any love for XML, but I do resent having to lose decades of progress to unrigorous technical thinking. I also don't like debating, anywhere, and especially not on Twitter.
🚀#
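To make the equivalence concrete, here's a minimal sketch, in Python rather than my own code, that serializes one structured object both ways and decodes it back to the same value. The method name and field names are made up for illustration.

```python
# A minimal sketch (illustration only, not my implementation): the same
# structured object serialized as an XML-RPC payload and as JSON.
import json
import xmlrpc.client

post = {
    "title": "Scripting News",
    "tags": ["xml", "json", "rss"],
    "stats": {"words": 1274, "links": 9},
}

# XML-RPC encoding: structs, arrays, strings and ints all round-trip.
xml_payload = xmlrpc.client.dumps((post,), methodname="blog.savePost")

# JSON encoding of the same object.
json_payload = json.dumps(post, indent=2)

# Both decoders give back an equivalent structure.
(decoded,), method = xmlrpc.client.loads(xml_payload)
assert decoded == json.loads(json_payload)
```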
BTW, I recall now that there is an
issue with how to represent dateTime.iso8601 and base64 types in JSON.
#
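To spell out the issue: XML-RPC has dedicated dateTime.iso8601 and base64 value types, while JSON only has strings and numbers, so a JSON encoding has to adopt a convention, and the receiver has to know which fields use it. Here's a rough Python sketch of one possible convention, not necessarily the one my implementation uses.

```python
# Sketch of the problem: XML-RPC has native types for these values, JSON doesn't.
import base64
import json
import xmlrpc.client

when = xmlrpc.client.DateTime("20200111T09:30:00")  # becomes <dateTime.iso8601>
blob = xmlrpc.client.Binary(b"hello world")          # becomes <base64>

# The XML encoding carries the type information in the markup.
xml_payload = xmlrpc.client.dumps((when, blob))

# One hypothetical JSON convention: ISO 8601 text for dates, base64 text for
# binary data. The type information is gone -- both are just strings now,
# and the receiver has to know, out of band, how to interpret them.
json_payload = json.dumps({
    "when": str(when),
    "blob": base64.b64encode(blob.data).decode("ascii"),
})
```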
A family story. My mother was a pure capitalist. She believed in hard work, being productive. She felt threatened by evidence of idleness. I drove her crazy. Even as a kid I would sometimes just sit in a chair in the living room of our apartment in Jackson Heights and think. Once she saw me sitting, lights off, no TV, no book, appearing to be doing nothing, and she lost her shit right there. Anyway, many years later, after all her years of begging me to get a job, I sold my company and then it went public, and her stock in the company was all of a sudden worth a lot of money. It was the only time I remember getting her unqualified approval. She boasted, even when I could hear, that I was profitable. In other words, the money and time she put into raising me made her money.
❤️#
Interesting tweet, says that it was
19 years ago today that the
enclosure element was added to RSS, by yours truly, in an effort to add
payloads to RSS. I looked it up, and they are right. And through the rest of
January 2001, Adam and I were trying out neat hacks that could move large (for the day) audio and video files back and forth between our offices in California and Europe. Here's the
piece I wrote that day explaining the idea. The site, twowayweb.com, is gone, but the article is preserved in many places, including
archive.org.
#
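For anyone who hasn't seen it, the enclosure element itself is tiny: three attributes on an item, pointing at a media file, giving its size in bytes and its MIME type, which is all an aggregator needs to fetch the payload in the background. A small Python sketch, with a made-up URL and numbers:

```python
# Building an RSS item with an enclosure. The title, URL and size are
# invented; only the shape of the element matters here.
import xml.etree.ElementTree as ET

item = ET.Element("item")
ET.SubElement(item, "title").text = "Morning audio"
ET.SubElement(item, "link").text = "http://example.com/2001/01/11.html"
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/audio/morning.mp3",  # where the payload lives
    "length": "4123456",                            # size in bytes
    "type": "audio/mpeg",                           # MIME type
})

print(ET.tostring(item, encoding="unicode"))
```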
You want to know how bonded Doc and I are? That
day in 1999 when we got his blog started, I saw he had a blue box in his office. I asked what it was. He didn't really know. I took it home with me, I didn't even ask permission. I realized even at the time this was
nervy of me, but Doc was happy that I did. Turns out it was a
Cobalt Qube, a revolutionary product, an
idea that seemed like the future in 1999, but sadly its future has yet to arrive. Hopefully this decade it will. (On second thought maybe it was the precursor of the
Raspberry Pi?)
#
- John Naughton wrote briefly about the difference between Facebook the company and Facebook the billion-user network. There aren't many people in journalism who see this dichotomy. He wrote to say he had a similar experience riding the rails in the UK, viewing the backs of people's houses and wondering what their lives were like. #
- I often do that when travelling by train in the UK in the dark, because British towns always turn their (unadorned, unguarded) backs to the railway while keeping their (brushed and spruced) fronts for the street or the road.#
- It's hard for those of us who predate Facebook not to feel challenged by it, and even tempted to reject it. But it is what it is. The open web didn't organize the world, Facebook did. I wish it were otherwise, of course. #
- Ignoring Facebook as a technology base that we'll build on in the future would be like refusing to listen to the Beatles in the 60s. You could do it, but if you were a musician or any kind of artist really, you'd be missing a common language that all your users understand. You'd be ignoring an important tool.#
- The real platform is the minds of all the people. #
- There's a book I read 30 years ago that opened my eyes to this -- Marketing Warfare by Ries and Trout. #
- People don't understand this "battleground as the minds of the users" idea, and that's why they don't grok Bloomberg. He really gets it and has the resources to develop all this virgin territory that I'm sure he understands every bit as well as Trump or Murdoch. #
- I got an email from Seth Godin asking for clarification of one of yesterday's mini-posts about how Google can index the web without crawling.#
- Doc said that Google and Bing are doing an increasingly bad job of archiving the history of the web. When he looks for something he wrote 20 years ago, the search engines can't reliably find it. He also said they're not indexing the current web like they used to. He experiences it thus: Google can't find one of his recent posts, but then after he visits it, presumably in Chrome, it can. #
- From this I conclude:#
- Google isn't crawling his site. In the past it would check frequently updated pages, such as blogs, every few minutes, for new links. On discovering a new link, it would read the page, add it to the index, and then the page would be findable in Google. (There's a toy sketch of that old loop after this list.) #
- But when he visits one of those pages in Chrome, a few minutes later it can be found in the search engine. Google appears to be using Doc's human behavior to find new pages to index. It can do this now because it has a popular web browser, so it can retire its old method of discovering links and let the users do the crawling. #
- My own experience. For what it's worth, I've found that Google is still really good at finding my old stuff, as long as there are some good unique words I can search for. If I search for "future-safe archives" for example, it gives me back a really good set of results. But the other day I was trying to find something I wrote about Obama and his online social net he let dwindle after the 2008 election, and although I believe I wrote about this a number of times, I could only find one piece, by going to the January 2009 archive page and scanning it with my eyes. #
- Caveat: This is all based on tea-leaf reading. Neither of us has any insight into what Google is actually doing to maintain its index. #
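Here's the toy sketch of the old discovery loop described above, as I imagine it, definitely not Google's actual code: poll a frequently updated page every few minutes, pull out its links, and queue anything new for indexing.

```python
# Toy sketch of link discovery by polling (a guess at the shape of the old
# method, not anyone's real crawler).
import re
import time
import urllib.request

BLOG_URL = "http://scripting.com/"   # a frequently updated page
seen_links = set()                   # links we've already discovered
index_queue = []                     # pages waiting to be fetched and indexed

def poll_once():
    html = urllib.request.urlopen(BLOG_URL).read().decode("utf-8", "replace")
    for link in re.findall(r'href="(https?://[^"]+)"', html):
        if link not in seen_links:
            seen_links.add(link)
            index_queue.append(link)  # new link: crawl it and add it to the index

while True:
    poll_once()
    time.sleep(300)                   # check every few minutes
```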