I'm John Gilbert

I am a creative who has focused on digital for the last 15 years. I started when Macromedia was new and Visual Basic 4 was hot. I taught myself the ropes, studied Computer Science in college (dropped out), and ended up getting a degree in design. Luckily, the internet came along, and my passion for the web was born. I have been combining my interests in design and technology since day one of my career, and I don't intend to do anything different in the future.

15 years

I've spent the last 15 years working with this small bunch. There is a lot more, but you get the idea... AAA, Agilent, AT&T, Audi, Best Buy, Chili's, Chipotle, Chiquita, Duracell, E*Trade, Epson, Finish Line, Gatorade, Grey Advertising, GSDM Advertising, Hallmark, Hewlett Packard, Jamba Juice, Jack Link's Beef Jerky, Jim Beam, McDonald's, Motorola, Molson Coors, Naked Juice, Nickelodeon, Noodles and Company, Pentax, Publicis & Hal Riney, Quiznos, Qwest, Red Robin, Sports Illustrated, US Cellular, Vail Resorts, Vox Vodka, and Walmart

Some Awards: 2008 Effie, Webby, 8 FWAs, 3 Adobe Site of the Day awards, Awwwards, Denver 50, W3 Best of Show, Create Magazine Creative of the Year, IAC Gold and Best of Show, Horizon Interactive Awards, and more...

creativity

technology

crazy ideas

Category Archives: Misc

Tuesday, October 12th, 2010

Watch These Colors Dance, Then See How It Was Done – Some amazing photography here

Posted in Misc

Tuesday, October 12th, 2010

Great video for Brother Printers

Posted in Misc

Friday, October 8th, 2010

Sesame Street Old Spice Parody

Posted in Misc

Friday, October 8th, 2010

Microsoft and Adobe Chiefs Meet to Discuss Apple: Woah

Left, Brendan Smialowski/Bloomberg; right, Matthew Staver/Bloomberg. Shantanu Narayen, president and chief executive officer of Adobe; at right, Steve Ballmer, chief executive officer of Microsoft.

Steven A. Ballmer, Microsoft’s chief executive, recently showed up with a small entourage of deputies at Adobe’s offices to hold a secret meeting with Adobe’s chief executive, Shantanu Narayen.

The meeting, which lasted more than an hour, covered a number of topics, but one of the main thrusts of the discussion was Apple, its control of the mobile phone market, and how the two companies could team up in the battle against it. A possible acquisition of Adobe by Microsoft was among the options.

The New York Times learned about the meetings through employees and consultants to the companies who were involved in the discussions or familiar with their organization, all of whom asked not to be identified because they were not authorized to speak publicly by Microsoft or Adobe. Those involved in the meeting, from its logistical setup to the discussion that took place, were instructed to stay quiet about the two companies holding council.

In the past, Adobe and Microsoft have been rivals with competing software, and the rivalry intensified in 2007 when Microsoft began promoting Silverlight, its software plug-in for the Web that directly competes with Adobe Flash.

Holly Campbell, senior director of Adobe’s corporate communications, did not deny the meeting took place when asked via e-mail. “Adobe and Microsoft share millions of customers around the world and the C.E.O.’s of the two companies do meet from time to time,” she said. “However, we do not publicly comment on the timing or topics of their private meetings.”

Microsoft said it did not “comment on rumors/speculation.”

One person familiar with the discussion said the two companies had talked about the blockade that Apple’s chief executive, Steven P. Jobs, had placed on Adobe’s Flash software for its hand-held devices and whether a partnership by Adobe and Microsoft could fend off Apple, which continues to grow at juggernaut speeds.

Another person with knowledge of the talks explained that Microsoft had courted Adobe several years ago. But the deal never moved past informal talks, as Microsoft feared that the Justice Department would most likely block the acquisition on antitrust grounds.

This person noted that at the time, Microsoft was the dominant force in technology and Google and Apple were not the giants they are today.

Randal C. Picker, a professor of law at the University of Chicago, said in a telephone interview that the technology landscape was drastically different now and that an acquisition or partnership of this nature would likely not be halted.

“There’s not a question that the atmospherics of Microsoft are much different than they were a decade ago,” he said. “I think you could imagine Microsoft being a more aggressive purchaser in a world where they are no longer an 800-pound gorilla.”

Professor Picker said the Justice Department and the Federal Trade Commission were focused on other large technology companies and consumer-related issues.

Posted in Misc

Thursday, October 7th, 2010

infographic coolness: The Rise of Social Network Ad Spending

Posted in Misc

Tuesday, October 5th, 2010

What Really Happens When you Fill out a CAPTCHA (VIDEO) – Very cool

Posted in Misc

Monday, October 4th, 2010

Reading this just made me fatter – Burger King’s four-Whopper-patty Pizza Burger

Posted in Misc

Wednesday, September 29th, 2010

Worst Ads of 2010: The Winners! via @fastcompany

Posted in Misc

Wednesday, September 29th, 2010

Below-the-Fold Gold: Advertising online and the “industry standard” risks.

Photograph used under Creative Commons license by Flickr user Pera Ola Wiberg ~ Powi.

There’s a myth that needs busting. Top leaderboard banners and 300×250 ads on the right-hand sides of web pages are not the most valuable pieces of real estate. They don’t deserve the highest CPMs due to greater perceived visibility, and they can even devalue publisher brands.

Ask yourself this: How often is what you’re looking for on a website actually on the top of the page? Not all that often, actually.

In a HUGE study of 60 people using the Internet to do everyday tasks—such as browsing news headlines and video clips, finding the latest sports results, booking a flight for vacation, selecting a restaurant for dinner, and choosing a movie and showtime—the content of interest was found to be further down the page and required users to scroll down. In many cases, users scrolled away before the above-the-fold display ad even loaded. No ad, no impression, no value.

In one example, an ad that occupied 1/3 of YouTube’s homepage above the fold didn’t get the attention of our survey respondents. Only one out of the 12 people who encountered the ad recalled seeing it.

Logically, an ad is more effective if it’s placed next to content the user wants to view, like adjacent to the reviews on Yelp or the definitions on Dictionary.com. This type of placement gives the ad more visibility and improves the chance of making an impression on the user.

The classic above-the-fold ad structure is not only ineffective, but it also risks devaluing publisher brands. So many homepages are built with this structure that they are all starting to blend together. Brand identity is lost in the clamor to offer advertisers the space they mistakenly think they want.

Someone—whether it’s a forward-thinking advertiser or a courageous publisher—needs to start paving ground below the fold. There’s an unclaimed pot of gold down there.

Dan Hou, senior product strategist, contributed.

Good article on taking a little risk and pushing your clients' advertising below the fold. I've tried a couple of times to help clients understand this issue, but it usually results in "that's not industry standard" talk. How about a JavaScript div that floats down the side of the page with the content?

Posted in Misc

Wednesday, September 1st, 2010

For the geeks: The problems with ACID, and how to fix them without going NoSQL

(This post is coauthored by Alexander Thomson and Daniel Abadi, www.cs.yale.edu/homes/dna/)

It is a poorly kept secret that NoSQL is not really about eliminating SQL from database systems (e.g., see these links). Rather, systems such as Bigtable, HBase, Hypertable, Cassandra, Dynamo, SimpleDB (and a host of other key-value stores), PNUTS/Sherpa, etc. are mostly concerned with system scalability. It turns out to be quite difficult to scale traditional, ACID-compliant relational database systems on cheap, shared-nothing scale-out architectures, and thus these systems drop some of the ACID guarantees in order to achieve shared-nothing scalability (letting the application developer handle the increased complexity that programming over a non-ACID compliant system entails). In other words, NoSQL really means NoACID.

Our objective in this post is to explain why ACID is hard to scale. At the same time, we argue that NoSQL/NoACID is the lazy way around these difficulties—it would be better if the particular problems that make ACID hard to scale could be overcome. This is obviously a hard problem, but we have a few new ideas about where to begin.

ACID, scalability and replication

For large transactional applications, it is well known that scaling out on commodity hardware is far cheaper than scaling up on high-end servers. Most of the largest transactional applications therefore use a shared-nothing architecture where data is divided across many machines and each transaction is executed at the appropriate one(s).
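
To make the setup concrete, here is a minimal, hypothetical sketch (the names and numbers are mine, not from the post) of how a shared-nothing store routes work: rows are hashed across nodes, and a transaction that touches keys owned by different nodes becomes a distributed transaction.

```python
# Hypothetical sketch of shared-nothing partitioning: each key is owned by one
# node, and a transaction touching keys on several nodes becomes "distributed".
NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    """Map a record key to the node that owns it (simple hash partitioning)."""
    return hash(key) % NUM_PARTITIONS

def partitions_touched(keys) -> set:
    """Which nodes must participate in a transaction over these keys?"""
    return {partition_for(k) for k in keys}

# A transfer between two accounts may or may not span machines:
print(partitions_touched(["acct:alice", "acct:bob"]))    # often two partitions
print(partitions_touched(["acct:alice", "acct:alice2"])) # sometimes just one
```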

The problem is that if a transaction accesses data that is split across multiple physical machines, guaranteeing the traditional ACID properties becomes increasingly complex: ACID’s atomicity guarantee requires a distributed commit protocol (such as two-phase commit) across the multiple machines involved in the transaction, and its isolation guarantee insists that the transaction hold all of its locks for the full duration of that protocol. Since many of today’s OLTP workloads are composed of fairly lightweight transactions (each involving less than 10 microseconds of actual work), tacking a couple of network round trips onto every distributed transaction can easily mean that locks are held for orders of magnitude longer than the time each transaction really spends updating its locked data items. This can result in skyrocketing lock contention between transactions, which can severely limit transactional throughput.
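
A back-of-envelope sketch of that contention effect, using hypothetical numbers of my own (the post only states the rough magnitudes): if the real work is ~10 microseconds but two-phase commit adds two network round trips, locks end up held for orders of magnitude longer than the work itself.

```python
# Hypothetical numbers illustrating why holding locks across a distributed
# commit protocol inflates contention; they stand in for "lightweight work,
# comparatively slow network" and are not taken from the paper.
work_us = 10            # actual time spent on the locked data items
round_trip_us = 500     # one intra-datacenter network round trip
commit_round_trips = 2  # two-phase commit: prepare phase + commit phase

lock_hold_local = work_us
lock_hold_distributed = work_us + commit_round_trips * round_trip_us

print(f"single-node lock hold:  {lock_hold_local} us")
print(f"distributed lock hold:  {lock_hold_distributed} us "
      f"(~{lock_hold_distributed / lock_hold_local:.0f}x longer)")
```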

In addition, high availability is becoming ever more crucial in scalable transactional database systems, and is typically accomplished via replication and automatic fail-over in the case of a crash. The developer community has therefore come to expect ACID’s consistency guarantee (originally promising local adherence to user-specified invariants) to also imply strong consistency between replicas (i.e., replicas are identical copies of one another, as in the CAP/PACELC sense of the word consistency).

Unfortunately, strongly consistent replication schemes either come with high overhead or incur undesirable tradeoffs. Early approaches to strongly consistent replication attempted to synchronize replicas during transaction execution. Replicas executed transactions in parallel, but implemented some protocol to ensure agreement about any change in database state before committing any transaction. Because of the latency involved in such protocols (and due to the same contention issue discussed above in relation to scalability), synchronized active replication is seldom used in practice today.

Today’s solution is usually post-write replication, where each transaction is executed first at some primary replica, and updates are propagated to other replicas after the fact. Basic master-slave/log-shipping replication is the simplest example of post-write replication, although other schemes which first execute each transaction at one of multiple possible masters fall under this category. In addition to the possibility of stale reads at slave replicas, these systems suffer a fundamental latency-durability-consistency tradeoff: either a primary replica waits to commit each transaction until receiving acknowledgement of sufficient replication, or it commits upon completing the transaction. In the latter case, either in-flight transactions are lost upon failure of the primary replica, threatening durability, or they are retrieved only after the failed node has recovered, while transactions executed on other replicas in the meantime threaten consistency in the event of a failure.
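
A toy model of that tradeoff (hypothetical classes, not any real replication API): a primary that waits for replica acknowledgements pays the replication latency on every commit, while one that acknowledges immediately risks losing or resurrecting in-flight transactions if it fails before the updates reach the other replicas.

```python
# Toy model of post-write (primary-first) replication. All names are
# hypothetical; the sketch only illustrates the latency/durability/consistency
# choice described above.
class Replica:
    def __init__(self):
        self.log = []

    def apply(self, txn):
        self.log.append(txn)   # pretend to apply and acknowledge immediately


class Primary:
    def __init__(self, replicas, wait_for_acks):
        self.replicas = replicas
        self.wait_for_acks = wait_for_acks
        self.log = []

    def commit(self, txn):
        self.log.append(txn)   # execute at the primary first
        if self.wait_for_acks:
            # Durable and consistent, but every commit pays replication latency.
            for r in self.replicas:
                r.apply(txn)
            return "committed after replication"
        # Fast path: acknowledge now, ship updates later. If the primary fails
        # before shipping, the transaction is lost (durability) or reappears
        # after recovery and conflicts with newer work (consistency).
        return "committed, replication still in flight"


primary = Primary([Replica(), Replica()], wait_for_acks=True)
print(primary.commit("debit alice / credit bob"))
```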

In summary, it is really hard to guarantee ACID across scalable, highly available, shared-nothing systems due to complex and high overhead commit protocols, and difficult tradeoffs in available replication schemes.

The NoACID solution

Designers of NoSQL systems, aware of these issues, carefully relax some ACID guarantees in order to achieve scalability and high availability. There are two ways that ACID is typically weakened. First, systems like Bigtable, SQL Azure, sharded MySQL, and key-value stores support atomicity and isolation only when each transaction only accesses data within some convenient subset of the database (a single tuple in Bigtable and KV stores, or a single database partition in SQL Azure and sharded MySQL). This eliminates the need for expensive distributed commit protocols, but at a cost: Any logical transaction which spans more than one of these subsets must be broken up at the application level into separate transactions; the system therefore guarantees neither atomicity nor isolation with respect to arbitrary logical transactions. In the end, the programmer must therefore implement any additional ACID functionality at the application level.
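
As a hypothetical illustration of what that pushes onto the application: a transfer between accounts that live in different partitions has to be issued as two independent single-partition transactions, and the application must cope with the window in which only one of them has committed.

```python
# Hypothetical sketch: a logical "transfer" over a store that only guarantees
# atomicity within a single partition. Each write is its own transaction, so
# nothing protects the non-atomic state in between.
partitions = [{"alice": 100}, {"bob": 50}]   # two single-partition stores

def single_partition_txn(pid, fn):
    """Pretend each call is an atomic transaction on exactly one partition."""
    fn(partitions[pid])

def transfer(src_pid, src, dst_pid, dst, amount):
    single_partition_txn(src_pid, lambda p: p.update({src: p[src] - amount}))
    # A crash or concurrent reader here sees money missing from both accounts:
    # no atomicity or isolation across the two partitions.
    single_partition_txn(dst_pid, lambda p: p.update({dst: p[dst] + amount}))

transfer(0, "alice", 1, "bob", 25)
print(partitions)   # [{'alice': 75}, {'bob': 75}]
```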

Second, lazy replication schemes such as eventual consistency sacrifice strong consistency to get around the tradeoffs of post-write replication (while also allowing for high availability in the presence of network partitions, as specified in the CAP theorem). Except with regard to some well-known and much-publicized Web 2.0 applications, losing consistency at all times (regardless of whether a network partition is actually occurring) is too steep a price to pay in terms of complexity for the application developer or experience for the end-user.

Fixing ACID without going NoSQL

In our opinion, the NoSQL decision to give up on ACID is the lazy solution to these scalability and replication issues. Responsibility for atomicity, consistency and isolation is simply being pushed onto the developer. What is really needed is a way for ACID systems to scale on shared-nothing architectures, and that is what we address in the research paper that we will present at VLDB this month.

Our view (and yes, this may seem counterintuitive at first) is that the problem with ACID is not that its guarantees are too strong (and that therefore scaling these guarantees in a shared-nothing cluster of machines is too hard), but rather that its guarantees are too weak, and that this weakness is hindering scalability.

The root of these problems lies in the isolation property within ACID. In particular, the serializability property (which is the standard isolation level for fully ACID systems) guarantees that execution of a set of transactions occurs in a manner equivalent to some sequential, non-concurrent execution of those transactions, even if what actually happens under the hood is highly threaded and parallelized. So if three transactions (let’s call them A, B and C) are active at the same time on an ACID system, it will guarantee that the resulting database state will be the same as if it had run them one-by-one. No promises are made, however, about which particular serial order the execution will be equivalent to: A-B-C, B-A-C, A-C-B, etc.

This obviously causes problems for replication. If a set of (potentially non-commutative) transactions is sent to two replicas of the same system, the two replicas might each execute the transactions in a manner equivalent to a different serial order, allowing the replicas’ states to diverge.
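
A tiny hypothetical example of that divergence: two non-commutative transactions, each history individually serializable, yet the two replicas end up with different balances because they chose different serial orders.

```python
# Hypothetical sketch: both replicas execute the same two transactions
# serially, just in different orders. Each history is serializable on its own,
# yet the replicas diverge because the transactions do not commute.
def add_interest(db): db["balance"] *= 1.10   # +10% interest
def deposit_100(db):  db["balance"] += 100

replica_1 = {"balance": 1000.0}
replica_2 = {"balance": 1000.0}

for txn in (add_interest, deposit_100):   # equivalent to order A, B
    txn(replica_1)
for txn in (deposit_100, add_interest):   # equivalent to order B, A
    txn(replica_2)

print(replica_1["balance"])   # 1200.0
print(replica_2["balance"])   # 1210.0  -> the replicas have diverged
```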

More generally, most of the intra- and inter-replica information exchange that forms the basis of the scalability and replication woes of ACID systems described above occurs when disparate nodes in the system have to forge agreement about (a) which transactions should be executed, (b) which will end up being committed, and (c) with equivalence to which serial order.

If the isolation property were to be strengthened to guarantee equivalence to a predetermined serial order (while still allowing high levels of concurrency), and if a layer were added to the system which accepts transaction requests, decides on a universal order, and sends the ordered requests to all replicas, then problems (a) and (c) are eliminated. If the system is also stripped of the right to arbitrarily abort transactions (system aborts typically occur for reasons such as node failure and deadlock), then problem (b) is also eliminated.
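
In spirit (a hypothetical sketch, not the paper's actual architecture), that added layer is a sequencer: it stamps each incoming request with a single global position and forwards the same ordered stream to every replica, so every replica already knows which serial order it must be equivalent to.

```python
# Hypothetical sketch of the ordering layer: every request gets one global
# sequence number and is broadcast to all replicas in that order; replicas
# then execute with equivalence to this predetermined serial order.
import itertools

class Replica:
    def __init__(self):
        self.queue = []

    def enqueue(self, seq, request):
        self.queue.append((seq, request))   # executed later, in seq order


class Sequencer:
    def __init__(self, replicas):
        self.replicas = replicas
        self.counter = itertools.count()

    def submit(self, request):
        seq = next(self.counter)        # decide the universal order exactly once
        for replica in self.replicas:   # every replica sees the same stream
            replica.enqueue(seq, request)
        return seq


replicas = [Replica(), Replica()]
sequencer = Sequencer(replicas)
sequencer.submit("transfer alice -> bob, 25")
sequencer.submit("add interest")
print([r.queue for r in replicas])   # identical ordered streams on both replicas
```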

This kind of strengthening of isolation introduces new challenges (such as deadlock avoidance, dealing with failures without aborting transactions, and allowing highly concurrent execution without any on-the-fly transaction reordering), but also results in a very interesting property: given an initial database state and a sequence of transaction requests, there exists only one valid final state. In other words, determinism.

The repercussions of a deterministic system are broad, but one advantage is immediately clear: active replication is trivial, strongly consistent, and suffers none of the drawbacks described above. There are some less obvious advantages too. For example, the need for distributed commit protocols in multi-node transactions is eliminated, which is a critical step towards scalability. (Why distributed commit protocols can be omitted in distributed systems is non-obvious, and will be discussed in a future blog post; the topic is also addressed at length in our paper.)

A deterministic DBMS prototype

In our paper, entitled “The Case for Determinism in Database Systems”, we propose an architecture and execution model that avoids deadlock, copes with failures without aborting transactions, and achieves high concurrency. The paper contains full details, but the basic idea is to use ordered locking coupled with optimistic lock location prediction, while exploiting deterministic systems’ nice replication properties in the case of failures.
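
A rough, hypothetical sketch of the ordered-locking half of that idea (lock location prediction is omitted): lock requests are issued strictly in the predetermined transaction order, so waits can only point from later transactions to earlier ones, no deadlock cycle can form, and transactions that touch disjoint data still run concurrently.

```python
# Hypothetical sketch of deadlock-free ordered locking: one thread enqueues
# lock requests strictly in the global transaction order, so a transaction can
# only ever wait on transactions ordered before it. (The paper's optimistic
# lock location prediction is not shown here.)
from collections import defaultdict, deque

lock_queues = defaultdict(deque)   # record key -> FIFO queue of txn ids

txns = [(1, {"alice", "bob"}), (2, {"bob", "carol"}), (3, {"dave"})]

def request_locks_in_order(ordered_txns):
    """ordered_txns: list of (txn_id, keys) in the predetermined global order."""
    for txn_id, keys in ordered_txns:
        for key in keys:
            lock_queues[key].append(txn_id)   # only queued behind earlier txns

def runnable_now(ordered_txns):
    """A transaction may execute once it heads every lock queue it needs."""
    return [txn_id for txn_id, keys in ordered_txns
            if all(lock_queues[k][0] == txn_id for k in keys)]

request_locks_in_order(txns)
print(runnable_now(txns))   # [1, 3]: txn 2 waits on txn 1; txn 3 runs in parallel
```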

We go on in the paper to present measurements and analyses of the performance characteristics of a fully ACID deterministic database prototype based on our execution model, which we implemented alongside a standard (nondeterministic) two-phase locking system for comparison. It turns out that the deterministic scheme performs horribly in disk-based environments, but that as transactions get shorter and less variable in length (thanks to the introduction of flash and the ever-plummeting cost of memory) our scheme becomes more viable. Running the prototype on modern hardware, deterministic execution keeps up with the traditional system implementation on the TPC-C benchmark, and actually shows drastically more throughput and scalability than the nondeterministic system when the frequency of multi-partition transactions increases.

Our prototype system is currently being reworked and extended to include several optimizations which appear to be unique to explicitly deterministic systems (see the Future Work section in our paper’s appendix for details), and we look forward to releasing a stable codebase to the community in the coming months, in hopes that it will spur further dialogue and research on deterministic systems and on the scalability of ACID systems in general.

Posted in Misc
