A Primer on L2 (ft. MSilb7) | Dune Arcana #4

"L2s (Arbitrum & Optimism) account for 30-40% of all transactions on Ethereum, but consume only 2% of the total gas..." That is one of the trends mentioned by Tomasz Tunguz in his talk at DuneCon regarding The State of Web3 in 2022. This week we explain what Layer 2 is with on-chain data.

Transcript

Jackie  0:00  

Good morning, everyone. Thank you for your patience. We had some technical difficulties, but now we're ready to go. So, welcome to our Dune Arcana Session Four, where we are going to explore the current Web3 trends and news with on-chain data. 

So, my name is Jackie, and with me as usual, we've also got Boxer. And then we've got a guest, Michael Silberling, our very own archwizard from Optimism. So today, we are going to take you through a ride on "What is L2?" and what kind of data analytics you can do with L2. 

Okay, so let's get started. At DuneCon, we had a talk by Tomasz Tunguz where he was saying one of the trends in crypto in 2022 is that L2s (namely Arbitrum and Optimism) account for 30 to 40% of all transactions on Ethereum. 

But they consume only 2% of the total gas, which really cements this as a trend, right? 30 to 40% of all transactions, and only 2% of the total gas. So looking at this trend in terms of a graph, this is what we will see. 

So, let's actually go to the complementary dashboard. I actually had it open already. Okay, so throughout this session, we'll be using this dashboard to kind of walk through all the data visualizations and break down the actual data analytics part. 

With the trend that we were just talking about, let's take a look at this graph in the bottom right corner. Here is a graph where, on the x-axis, we have the time, so the day, and then the y-axis represents the transactions. 

It's the percentage of the transactions. On this graph, we've got the Ethereum number of transactions, and then Optimism transactions and Arbitrum transactions. 

Here, if we hover over them, we can see that the blue bars represent the Ethereum mainnet transactions. You can see it's hovering at 70%...75%...65%. 

And then Optimism is 20%, and Arbitrum is like 14%. We can clearly see the trend that Tomasz is talking about right through this graph. A lot of transactions happen on the L2 chains, namely Arbitrum and Optimism. 

Then, looking at the gas, we can see that same thing, like on the x-axis. We got the time, so here's weekly, and then the y-axis represents the percent of the gas being consumed. 

And then, here, we can see that the percent of the gas being consumed by L2s, like Arbitrum, Optimism, zkSync, dYdX, all of those are hovering around 2%. Sometimes, it can be even lower than that. 

Clearly, this is a large trend that we're seeing. But you might be wondering, right? What exactly are Layer 2s? And what other things can we do with this Layer 2 data? We're gonna start with "What is Layer 2?" 

So, Layer 1 comes before Layer 2. We're going to talk about Layer 1 first in order to understand Layer 2. Layer 1 is not really a concept that's unique to Ethereum. It's actually common across a lot of chains. 

Bitcoin, Ethereum, Solana — they're all Layer 1s. These are standalone blockchains themselves. These are the base layers. You can think of Ethereum: it has its own consensus mechanism, parameters, tuning, all the good stuff. 

And then Layer 2s are scaling solutions that help increase the capabilities of L1s by handling transactions off-chain. Alright, let's dig into it a little bit more before we get to the actual benefits of it. Okay, why do we need L2s?

Why do we need this scaling solution off of an L1 chain? When demand is high…think about if you were in crypto already in 2017, to experience the ICO mania (and CryptoKitties, right?), and then the 2020 DeFi summer, and even last year when NFTs were booming…when the demand is high, traffic really gets congested on Ethereum. 

As a result, sometimes you're paying $50 or $200 just to do a simple swap and then you have to wait for a really long time. And that is like a really bad user experience. Right? 

Sometimes it's just not really affordable. People can't pay $50 just to swap some tokens. What we can see with on-chain data is that…so actually, let's go to the dashboard one more time.

Okay, we're talking about the traffic getting congested, right? Here is a graph that we can see. We have the number of daily transactions in blue. And then we have the daily median gas in red-orange. 

And then you can see here that these two are very closely correlated. When the number of transactions goes up, the median gas also goes up. Effectively,  just think about it as a transaction fee you have to pay. 

When there's a lot of demand, then the gas also jumps up. And you can totally see it: in the 2020 summer, it really spikes up. Just to make this a little bit more concrete in terms of dollar values, let's take a look at this graph, which is the Ethereum average and median transaction fees. 

You can kind of see, on the y-axis, right, the axis actually goes up to like $80. Sometimes fees have spiked up to even $100. And that's just really bad — not affordable. 

And in contrast to that, if we look at Optimism, which is an example of a Layer 2 solution, we can see that the y-axis actually goes up to single-digit dollars, right? It's like a huge reduction in terms of the fee that you need to pay. 

Clearly, we need something like Layer 2 to help us solve the scaling problem. Okay, so coming back to our slide. Now, we've seen through the data what the problem with L1 is. We're at a point where we really know that we need a scaling solution to help us with lower transaction fees and faster transaction times. 

Okay, so now we've established why we need a scaling solution. Let's talk about different ways you can scale, and then we will get to the part about why we need an L2. 

So, first of all, we can start by just scaling the base layer of the blockchain. If we want to scale Ethereum, you'll often hear about sharding. That's one way to do it. And then another way you can scale is through L2s, right? So, through L2 scaling, you're offloading some of the work. 

You offload the execution part of the work to the L2 chains, which are built on top of L1s. An L2 basically leverages the security of L1 by anchoring its state into L1. Some examples of L2s include Arbitrum, Optimism, Loopring, and zkSync. 

And in addition to that, another way you might hear about scaling is through sidechains. Sidechains are EVM-compatible, independent blockchains, with their own consensus models and block parameters. 

Examples of that are Gnosis Chain and Polygon. I put a star next to Polygon because Polygon itself is also kind of like a Layer 1 solution, but they are also a sidechain. 

Because they're EVM compatible, and through bridges, they can talk to Ethereum. Okay, coming back to L2s: even within L2s themselves, there are different ways that you can achieve scaling. Some of the ways include state channels. 

If you're familiar with Bitcoin's Lightning Network, that's one of the examples. Another way is through Plasma — think about the OMG Network. And you can also do it through roll-ups. State channels and Plasma are out of scope for this session; we're gonna just focus on roll-ups because that is the preferred L2 scaling solution. 

And if you want to know more about it, you can go read this post by Vitalik Buterin on a roll-up-centric Ethereum roadmap. Okay, roll-ups basically bundle or “roll-up” transactions and submit them to L1 in batches. 

So, this is a good point to visit this concept, the scalability trilemma, and try to understand why we can't just simply use the Ethereum base chain to achieve what we want in terms of scalability. 

Generally, the scalability trilemma states that there are three desirable properties of a blockchain: security, scalability, and decentralization, right? Think of them as a triangle here. 

And the trilemma states that the three cannot be achieved simultaneously for a blockchain — only two out of the three. 

Okay, if you want to have security and decentralization, then you will not get to scalability, right? And if you have scalability and decentralization, then you cannot get the security.

Also, in the third scenario, if you want scalability and security then you are sacrificing decentralization. Think about it as your local big bank IT infrastructure. It's pretty scalable; it's pretty secure. 

But the price that you have to pay, like the trade-off you're making is that they're running these big, big computers. And they are able to just censor you if they want to. So, you're sacrificing the decentralization part. 

But roll-ups, the way they're designed, can solve this problem. They can have a higher transaction speed and higher transaction throughput, while also inheriting the security of the underlying L1 they're built on. 

Effectively with L2, you've got all three. You've got decentralization and security, which are the two properties that we desire in Ethereum. And you've got scalability, right? Okay. 

So how do L2s, or more specifically roll-ups, achieve this today? That's what we're going to talk about. 

So, the roll-ups are the general-purpose scaling solutions. They basically execute transactions outside of the L1. They do all the calculations on the L2 chain themselves, so things are much cheaper. And then after that, they will post the transaction data onto L1. 

So, they write the call data back to L1. And then they're using L1 as a data availability layer. 

And then they also derive security from the L1. So, in terms of the flow of the process, it goes like this: the L2 executes the transactions, does all the calculations, takes the data, and maybe does some sort of compression. It rolls up many, many transactions into one batch. Then, it posts that one batch as call data onto the L1 chain. 

Okay, so now, armed with all this knowledge of what an L2 is and what roll-ups are, let's revisit the query that we were just looking at. 

So, if we remember going back here, we were looking at this particular query, and we saw that L2 gas as a percent of ETH gas is basically 2%. If you look inside the query, there's a bunch of addresses that we're filtering for. 

And then if you take Optimism, as an example, you can see here, this is an address for Optimism. It’s a canonical transaction chain address. 

When we were talking about L2 rolling batches of information into L1, it's basically writing all the data onto a specific address. That's how it's posting data onto L1s. And I'm sure Michael will give you more specific examples of how this is done in action. 

Okay, so one more thing to mention before we really take a deep dive into this. In terms of the roll-ups we talked about: we came from what L2s are, then different ways you can scale, and then we focused on roll-ups. 

And then also within roll-ups, we even have this concept: there are zero-knowledge roll-ups and optimistic roll-ups, right? So, what are the differences and similarities? 

So, both the ZK roll-up and the optimistic roll-up roll up transactions into a single batch and post the transaction data to L1. But they differ primarily in how this transaction data is posted, and how the data is verified. 

Okay, so for the ZK roll-up, they use validity proof. What that means is that they run the computations off-chain and calculate a “zero knowledge” proof and then post the data along with this proof on L1. 

And generally speaking, it's mostly application specific. Whereas an optimistic roll-up uses fault proofs. So, the transactions are assumed to be valid — optimistically — and they can be challenged. 

These transactions that are posted can be challenged, if necessary. Generally, there's a seven-day period after the transactions are posted onto L1 where you can challenge the validity. The optimistic roll-up will only run the computations on-chain when a fault is suspected. 

So, if someone flags, "Hey, something is wrong with this," then the computations are run on-chain. And optimistic roll-ups are often largely generalizable. Contrast that with the validity proof for ZK, where it's the proof itself that's transmitted to the L1 chain. 

Okay, so that was a lot to digest. Just to recap, we went from what an L2 is to what a roll-up is. To be specific, we talked about optimistic roll-ups. For the rest of the session, we're going to use Optimism, a type of optimistic roll-up, as an example. 

That will help us make a deeper dive into how L2s roll up data into L1, how to look at L1 and L2 gas fees, and what call data compression is. So, now I will welcome Michael to the stage to take us through the rest of the journey.

Michael  15:20  

Awesome. I can take over the screen. Yeah, thanks. That's great. Yeah, definitely a good primer on everything, too. 

Yeah, so I'm Michael, a data analyst at OP Labs and a contributor to the Optimism Collective. 

I just wanted to give concrete examples of everything that we just went through — what this looks like on-chain if you're looking through Etherscan or transaction tables in Dune, and how to pull out what's actually happening. 

So, a cool thing is the whole scope of Layer 2 data. You could look at what's happening with the application layer and what users are doing. That stuff is all pretty similar: transactions on Layer 1 versus transactions on Layer 2 look fairly similar. I can share my screen.

Cool, yeah. So yeah, with that context, I wanted to just focus here on what things are kind of unique about Layer 2. You know, specifically about data being sent between layers, how gas fees are calculated, some of the efforts going on to eventually lower these fees, and everything throughout there. 

It's kind of like what we mentioned before — how data is sent to L1 and how fees are calculated. 

But like everything, probably a lot of the exact detail here will be outdated very shortly, with new protocol versions coming out and new EIPs going through. 

In general, because of how these things work, it will still help with understanding how to adapt and adjust to some of these protocol changes that will come through. 

And actually, I'm gonna share my screen in a different mode so maybe that’s easier. There we go. Hold on, technical difficulties earlier with my problem. Okay, now we got it. Cool. So to start here, we'll walk through kind of the lifecycle of a transaction. 

If you use Optimism, or any other kind of Layer 2, you just connect your wallet and send a transaction as you would anywhere else. For example, here is an NFT trade on Quixotic, now known as Quix. It's a general kind of transaction: an ERC-721 transfer. 

So, what you'll see in this transaction if you scroll to the very bottom is a big input data thing that kind of tells you all the information about the transaction. It's what a user sending a transaction has sent to the EVM to then get all this computation altogether. We’ll talk about how all that happens. 

This is a very nicely structured version that Etherscan will show. But in its raw form, it's this huge long byte string. Using Dune or any other kind of chain explorer, you'll see the data field or input data field — it's just this whole byte string. This is known as call data. 

That's important to go through, because when we say the L2 sends data to L1, it's this data — what the user sent to the L2 — that's actually being sent. 

You also see in this transaction that, after some time passed, it was submitted to L1. Etherscan has a nice way of linking those things together, so for this transaction I sent, I can see where it's listed on L1, and this call data here will be relevant for that. 

As mentioned, if you click on any of these, like the transaction batch index here, you'll see all the other L2 transactions that were included in this one batch. 

In this one specifically, just from December of last year, there are 140 transactions included in the batch, and we'll see how this all comes through to L1. 

Just for some closing context here, before we go into what the L1 transaction looks like: these batches are submitted usually every few minutes, and that definitely varies by protocol design and any kind of changes that might happen. 

So, you can see that recently Arbitrum's time between batches increased a little bit, and Optimism's time between batches decreased a little bit, but historically they've tracked pretty closely at about a few minutes between batches. 

But cool. So, coming back to transactions going down to L1. If we follow the link from the L2 transaction down to its batch submission, that's this transaction, which we're looking at on Etherscan here. 

You'll see an event here with all the information about the batch — just enough information to link transactions back to which L2 batch they were in. 

You can see the starting number of previous elements — how many transactions were included up until this batch. This batch had 140 elements, and you can link things up through this. 

And then you look through this transaction and, similarly, scroll all the way down to the input data. You see this appendSequencerBatch call, and then a whole bunch of zeros. If it expanded all the way out, it would go on and on — a whole bunch of data there. 

So this data, as of December of last year, before a bunch of other upgrades happened, is the input data we showed from our L2 transaction. It's all just appended together with some other data into one L1 transaction. You can take the call data from your L2 transaction and Ctrl+F to find it on Layer 1. 

When we say Layer 2 data is sent to Layer 1, this is it actually happening: your data just gets taken and sent along with all the other transactions in this batch. 

This is expensive, as you can see. It involves taking all this data and appending it together, which gets pretty long. And the more data is sent to L1, the more expensive it is to send. 

So, we'll jump into a little bit of how this fee model works and why that is. Backing up first: what is gas? We referred to it a little bit earlier, and here are some definitions from Ethereum. 

It's how much you pay to conduct a transaction on Ethereum. You know, the Ethereum website defines this as a unit measuring the amount of computational effort to execute specific operations on Ethereum. 

So, anytime you've done any kind of data analysis on transaction fees, it's just how much gas your transaction uses — we'll get into how to calculate that — times the price of gas at that moment, whether it's 100 gwei, 50, 20, etc. 

And what's important to know here is that the part of the gas cost that really influences roll-ups is the cost of the call data, the input portion. If you go back up here and look at this whole string, you can break it up into bytes. 

If you look at the L2 call data here — six zeros, then the byte F4, and so on, all the way through — each of those bytes has a cost to send to Ethereum. 

So, in the Ethereum yellow paper, you'll see it's four times the number of zero bytes plus 16 times the number of nonzero bytes. That's your total call data gas. And if you hear "call data," that's the big cost for roll-ups today. 
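That pricing rule is simple enough to sketch in a few lines of Python (the 4/16 split is the call data pricing described above; the example bytes are made up):

```python
def calldata_gas(data: bytes) -> int:
    """Call data gas: 4 gas per zero byte, 16 gas per nonzero byte."""
    zero_bytes = data.count(0)
    nonzero_bytes = len(data) - zero_bytes
    return 4 * zero_bytes + 16 * nonzero_bytes

# "00f400": two zero bytes (4 gas each) + one nonzero byte (16 gas) = 24 gas
print(calldata_gas(bytes.fromhex("00f400")))  # → 24
```

This is why zero-heavy call data is comparatively cheap to post, and why the compression discussion below focuses on getting rid of bytes entirely.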

And we'll show that in a nice visual segment later, too. So yeah, going back to this L1 batch submission, and now understanding that each of these bytes has a cost associated with it (and zero bytes cost less), you might look at this and be like: that's a lot of information. 

How do we make this smaller? Or how can we make this cheaper? That's where call data compression comes in. You take all the data and just make it smaller. 

Yeah, I mean, it kind of works. You can see a link at the bottom here — you can search for some blog posts that we published about doing this call data compression. 

And so, as of now, both Optimism and Arbitrum, to my knowledge, do call data compression. It's a good source of cost savings. And what that looks like: before, you had this whole huge array with a bunch of zeros; with compression, all the zeros go away. It's very satisfying to see all the numbers changing. 
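To get a feel for why this helps, here's a toy sketch: compress a zero-heavy byte string with zlib and compare the call data gas before and after. (The actual compression schemes the roll-ups use differ; zlib and the sample bytes are just my stand-ins for illustration.)

```python
import zlib

def calldata_gas(data: bytes) -> int:
    # 4 gas per zero byte, 16 gas per nonzero byte
    zeros = data.count(0)
    return 4 * zeros + 16 * (len(data) - zeros)

# Toy batch data: a little payload padded with lots of zeros,
# like the appendSequencerBatch input shown above
raw = bytes.fromhex("f4a1" * 20) + b"\x00" * 2000
compressed = zlib.compress(raw)

print(f"raw:        {len(raw)} bytes, {calldata_gas(raw)} gas")
print(f"compressed: {len(compressed)} bytes, {calldata_gas(compressed)} gas")
```

Even though every compressed byte is likely nonzero (16 gas each), the compressed blob is so much shorter that the total call data gas still drops sharply.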

Looking at the impact here — this is a screenshot of a Dune query and its results — you can see there's the canonical transaction chain. This is a contract on L1 that transaction batches are currently posted to. 

So this query is doing a segmentation, as you can see: taking transactions from the month before compression launched (February of this year) versus the month after compression launched. And then, just for comparison, showing what the last 30 days look like as well. 

You're looking at how much the L1 footprint of an L2 transaction is — how much gas on L1 the transaction is using. Before compression, this gray bar was higher, at about 4 to 6k gas per transaction. 

After compression, you see it dropped to about 3k. It definitely fluctuates by transaction type, compression algorithm, and things like that. And to measure this over time, there's this thing called the call data compression ratio. 

Yeah, how do I find this? As you can see up top in this query, extracting bytes in SQL is kind of messy, so just copy and paste this stuff here. This is working from the data string. 

This is a query on a transactions table, taking the data string: you do this mess to get how many nonzero bytes there are and multiply by 16, a whole other mess to get the number of zero bytes and multiply by four, and you get the amount of call data gas. 

So, you can just measure the amount of call data used on L2 versus the call data gas actually paid in the Layer 1 transactions, and calculate the ratio of how much call data is used on L2 versus how much is actually paid on L1. 
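In Python rather than SQL, the ratio that query computes might look like this (the function names and batch numbers are mine, invented for illustration):

```python
def calldata_gas(data: bytes) -> int:
    # 4 gas per zero byte, 16 gas per nonzero byte
    zeros = data.count(0)
    return 4 * zeros + 16 * (len(data) - zeros)

def compression_ratio(l2_inputs: list[bytes], l1_calldata_gas_paid: int) -> float:
    """Gas actually paid on L1 for the batch, divided by what the raw
    L2 call data would have cost uncompressed."""
    uncompressed_gas = sum(calldata_gas(d) for d in l2_inputs)
    return l1_calldata_gas_paid / uncompressed_gas

# Toy batch: four txs whose raw call data totals 4,000 gas,
# posted to L1 for 2,000 gas after compression → ratio 0.5
batch = [b"\x00" * 250] * 4             # each tx: 250 zero bytes = 1,000 gas
print(compression_ratio(batch, 2_000))  # → 0.5
```

A ratio below 1 means compression is saving L1 gas; the lower it goes, the better the algorithm is doing.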

So, it's relevant there. And, like before call data compression, you could do this Ctrl+F to find your transactions on L1.

Now that it's compressed — and that's all done at the node level — you can't do that on Etherscan anymore, because it's all compressed into non-human-readable bytes. 

So yeah, that's for context on how Layer 2s have so far done some work to try to reduce fees, and why it matters — why doing call data compression matters.

So, with that, let's go into how L2 transaction fees are calculated and what makes them magically cheaper. 

Yeah, so, transaction fees on Layer 2. And this is kind of specific to Optimism. I'll show the Arbitrum transaction fee calculation too — we're gonna get to it. 

But I'm personally more familiar with Optimism. So, focus on that. Transactions on Layer 2 are kind of split up into two different types of fees. 

So one of them, what we call the L1 data fee, is what I just went through: the cost to take this input data on L2 and submit it down to L1. And then there's the L2 execution fee, which is just the cost of running the transaction on Layer 2. As mentioned before, the execution is off-chain. 

That's where it gets way cheaper. So, going deeper into the L1 data fee and decomposing it a little bit, you will see that, in Dune, there's a field called L1 gas used. 

So, it's just a calculation of how much gas is being used in sending this data down to L1. It's calculated there. It's nice. 

That, times the L1 gas price — since that is data being sent to L1, the L2 has to pay the L1 gas price for it. 

As we mentioned before, how much gas that is, is a function of how much input data there is. Yeah, and the L2 execution fee is straightforward as well: just how much gas is being used on L2. 

And then that, times the gas price on Layer 2. Yeah, to break it down more, here's what it looks like in Dune (as mentioned already) for Optimism: this L1 data fee is surfaced as the L1 fee field. 

So, you can just take that and add the L2 execution fee, which is L2 gas used times L2 gas price. I'll get into later how that L2 gas used field can introduce some pretty interesting things. 

And in Arbitrum, there's an effective gas price times the gas used for the fee — there's a separate gas accounting there. But both of these things get you to the total transaction fee, all right. 

Yeah, so it's decomposing a little further. We maybe touched on this a little bit already. But when we talk about L1 fees, what's actually happening under the hood?

So, we already talked about gas used and gas prices. There is also a fee scalar in the Optimism fee calculation; historically, when Optimism first launched, that fee scalar was set to 1.5. 

That was effectively an extra 50% margin charged on top of all transaction fees, earmarked to fund public goods. Since then, as compression came through with the EVM-equivalence upgrade and other costs decreased, that scalar has come down as well. 

So, the scalar is currently one, so it doesn't really impact gas price calculations now, but yeah, here's the full decomposed thing. You can even decompose L1 gas used further into call data gas plus overhead gas plus non-call-data gas. It's just a continuous nesting doll of gas to get to the actual final number. 

And then L2 execution we've talked about a few times already, but with the L2 gas price, usually the floor is like 0.01 gwei. Rarely it will rise above that, but usually not by much. 
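Putting the whole nesting doll together, here's a sketch of the fee formula as described in this talk — the exact model has since changed with protocol upgrades, and the argument names and numbers below are illustrative, not exact Dune column names:

```python
def optimism_tx_fee_gwei(
    calldata_gas: int,    # gas for the tx's input data posted to L1
    overhead_gas: int,    # this tx's share of the batch's fixed cost
    l1_gas_price: float,  # in gwei
    fee_scalar: float,    # 1.5 at launch, 1.0 at the time of this talk
    l2_gas_used: int,
    l2_gas_price: float,  # in gwei; floor around 0.01 gwei
) -> float:
    """Total fee = L1 data fee + L2 execution fee, in gwei."""
    l1_gas_used = calldata_gas + overhead_gas
    l1_data_fee = l1_gas_used * l1_gas_price * fee_scalar
    l2_execution_fee = l2_gas_used * l2_gas_price
    return l1_data_fee + l2_execution_fee

# With L1 gas at 30 gwei, the L1 data fee dwarfs the L2 execution fee:
# (1,600 + 2,100) * 30 * 1.0 = 111,000 gwei vs 21,000 * 0.01 = 210 gwei
print(optimism_tx_fee_gwei(1_600, 2_100, 30, 1.0, 21_000, 0.01))  # ≈ 111,210
```

The example makes the next point concrete: almost the entire fee comes from the L1 term, which is why the L1 share sits around 90 to 99%.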

So, we'll see this L2 fee is kind of negligible. Cool. So, here's a good visualization of that. This is a query that you'll see on the big master dashboard. 

Yeah, so this is decomposing a transaction on Optimism, taking the total transaction fee: what share of it is due to the L1 fee, and what share of it is due to the L2 gas used and gas price?

Certainly, there's some random variability. But over the long run, you'll see that the total share of the L1 fee for an Optimism transaction is like 90 to 99%. So, that's pretty large. 

And the L2 costs, as we said, are so far kind of negligible. So we talk about things like call data compression and other kinds of strategies. At least in my mind, it's low-hanging fruit. 

Here's how to lower fees more: take the L1-fee portion and try to reduce that as much as we can. Cool. And if you look today, gas prices on L1 are low, so if you're doing a trade on an L2 now, you know, it's still fairly cheap. 

The average just yesterday for Arbitrum was about nine cents; for a Uniswap trade on Optimism, it was about 16 cents. But if we want to keep reducing fees, where's the focus? Let me click off the screen. There we go. 

As I mentioned before, in my opinion, the low-hanging fruit is trying to reduce that L1 portion. There are a few ways we've seen these fees get reduced so far. 

The first is just, in general, increased transaction volume — think about submitting larger batches to L1. Each transaction batch submission is executing a contract on L1, so beyond the call data used, there are some fixed costs for executing that transaction. 

When L2 fees are calculated, that cost, called the overhead cost, is amortized across all the transactions in the batch. 
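A tiny sketch of that amortization effect (all numbers here are invented for illustration):

```python
def l1_gas_per_tx(calldata_gas_per_tx: int,
                  batch_overhead_gas: int,
                  batch_size: int) -> float:
    """Each transaction pays its own call data gas plus an equal share
    of the batch submission's fixed overhead."""
    return calldata_gas_per_tx + batch_overhead_gas / batch_size

# Bigger batches spread the fixed cost thinner:
for n in (10, 100, 1000):
    print(n, l1_gas_per_tx(1_500, 200_000, n))
# 10 → 21500.0, 100 → 3500.0, 1000 → 1700.0
```

That's exactly the steep curve described next: per-transaction gas falls quickly as batch size grows, then flattens out as the fixed overhead becomes negligible.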

So, what you see here on the left is just a sample of Optimism transactions where the x-axis is how many transactions are in the batch, and the y-axis is how much gas is used per transaction once that overhead is amortized. 

And it's a pretty steep decline: the more transactions, the lower the gas cost per transaction. The other trend we talked about already: you can compress the call data, probably continuously get better at those algorithms, and make that L1 share smaller. 

So, these are things that have happened in the past and that are continually being worked on. Yeah. Another thing coming up recently is how we reduce the L1 footprint further. This is one that I'm still getting up to speed on as well. 

But just to mention here, there's a push for EIP-4844, which is saying — this is my understanding of it — rather than posting all the transaction data to L1, you post some cryptographic commitments to L1 pointing toward the data. 

The data is stored in the nodes temporarily, then eventually pruned. In my mind, one line of thinking was, "How do we do all this optimization stuff up here?"

And then another solution was like, "Well, what if we just work on the other end?" 

So, this EIP has been a combined effort between OP Labs, Coinbase, the Ethereum Foundation, and other individual contributors as well. But yeah, definitely a whole rabbit hole to deep dive into. 

Okay, so the last thing to touch on, which I think is very cool about the Optimism design and about communicating the L2 value prop in general: we showed the gas used field on Optimism before — this is the L2 gas used field — and a cool property of the Optimism design is that the L2 gas used field is equal to how much gas that same transaction would use on L1. 

So for example, an ETH transfer is 21,000 gas on L1. It's also 21,000 L2 gas on L2. The same goes for a Uniswap trade or any other L2 app. So, that's pretty cool. And then we have the L1 gas price, which is fed to the L2 by an oracle. 

There's this cool thing you can do to calculate, hypothetically, how much ETH has been saved by L2s, or how much cheaper this transaction is on L2 than Layer 1. 

You have this pretty simple calculation to do. So, similar to Layer 1, where you just do gas used times gas price, here we can do L2 gas used times the L1 gas price. That gives us, for a given L2 transaction…

…how much cheaper it is than on Layer 1. There's a table on the dashboard to show what that looks like with actual examples as well. And you can go the other direction also.
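That comparison is simple enough to sketch, assuming the Optimism property just described (L2 gas used equals the L1 gas for the same transaction); the gas prices and fee values below are example numbers, not real data:

```python
GWEI = 1e-9  # ETH per gwei

def hypothetical_l1_fee_eth(l2_gas_used: int, l1_gas_price_gwei: float) -> float:
    # On Optimism (at the time of the talk), l2_gas_used equals
    # the gas the same transaction would use on L1.
    return l2_gas_used * l1_gas_price_gwei * GWEI

# A plain ETH transfer: 21,000 gas with L1 gas at 30 gwei
l1_fee = hypothetical_l1_fee_eth(21_000, 30)        # ≈ 0.00063 ETH

# Compare with what the L2 tx actually cost to get "how many times cheaper"
actual_l2_fee = 0.00002  # example actual L2 fee in ETH
print(round(l1_fee / actual_l2_fee, 1))  # → 31.5
```

Going the other direction, as mentioned, you'd start from an L1 transaction's call data and gas used and estimate its cost on L2, which takes more assumptions.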

There are definitely more assumptions there, but cool. You can use how much call data an L1 transaction uses to kind of simulate what this L1 app — this DAO vote, whatever — would cost on Layer 2. I think it's been beneficial so far for communicating the value prop for an app to migrate some of its usage to Layer 2. Cool. 

So, as I mentioned, these things are probably easier shown live, so I'll jump over to this dashboard. 

We'll jump into this table now. This is what I was talking about before: using all the information of a transaction on Layer 2 to simulate how much it would cost on L1. So, I can jump into it a little bit here. 

The table first shows what type of transaction it is. So, here it's a staking transaction on Pika. Where's another good one? A Chainlink oracle update, token approvals, Perp doing its thing. Yeah. So, if I wanted to? 

Yes, as I mentioned before, these are just transactions coming from the Optimism transactions table. So, if we want to calculate what the transaction fee is: the L1 fee plus the gas used times gas price, decimal adjusted for my…

Boxer  36:36  

Can you quickly zoom in on your screen a bit and maybe scroll a bit slower? The stream is not catching up to your streaming and scrolling speed. Okay, yeah, that's better. Thank you.

Michael  36:58  

Cool. Yeah. Okay, I'll try to stop scrolling; I scroll a lot. So, if I wanted to calculate what the Layer 2 gas fee is, there's a field which aggregates all this other stuff together: add the L1 fee to gas used times the L2 gas price, decimal adjusted. There's your ETH fee.

Similar to what I mentioned, take the same exact transaction and find out what the costs would be using just this field, with the L1 gas price instead of the L2 gas price: the call data fee.

The addition is: take the actual gas used, multiply it by the L1 gas price, and that's what the fee would be on L1. And then that's just a building block to do all kinds of things. I'll try not to scroll fast.

You can do all kinds of things. Like, how many times cheaper is it? Or, what's the share of the fee that's L1 versus L2? How many times cheaper is it?
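Those derived metrics can be sketched roughly like this, loosely following the fee model described above for Optimism (total fee = L1 data fee + L2 execution fee). All names and numbers here are assumptions for illustration, not the dashboard's actual columns.

```python
# Sketch of the "how many times cheaper" and "L1 share of fee" metrics.
# Fee model assumed: total L2 fee = L1 data fee + L2 gas used * L2 gas price.

GWEI = 1e-9  # 1 gwei in ETH


def l2_total_fee_eth(l1_data_fee_eth: float, l2_gas_used: int,
                     l2_gas_price_gwei: float) -> float:
    """Total fee paid on L2: data posting cost plus execution cost."""
    return l1_data_fee_eth + l2_gas_used * l2_gas_price_gwei * GWEI


def times_cheaper(hypothetical_l1_fee_eth: float, l2_fee_eth: float) -> float:
    """How many times cheaper the L2 transaction is versus its L1 equivalent."""
    return hypothetical_l1_fee_eth / l2_fee_eth


def l1_fee_share(l1_data_fee_eth: float, l2_fee_eth: float) -> float:
    """Share of the total L2 fee that comes from posting data to L1."""
    return l1_data_fee_eth / l2_fee_eth


# Assumed numbers: 0.00004 ETH L1 data fee, 100,000 L2 gas at 0.001 gwei,
# and a hypothetical L1 cost of 0.003 ETH for the same transaction.
fee = l2_total_fee_eth(0.00004, 100_000, 0.001)
ratio = times_cheaper(0.003, fee)
share = l1_fee_share(0.00004, fee)
```

With these assumed inputs the L1 data fee dominates the total, which matches the point made later that call data cost is the big contributor to L2 fees.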

Yeah, all these kinds of things. So, I think there's a cool example here of how some apps are built in a way where the savings are way, way higher.

For the ones actually mapped here, we'll look at a few. Yeah, so one example: a Uniswap trade, you know, migrated from L1 to L2. Take this transaction here: it costs 18 cents on Optimism. The same transaction is $5.38 on L1.

That's about 30 times cheaper, and that's probably a fairly similar contract to the one on L1. But for some of these L2-native apps, take something like Lyra, which is a protocol on Optimism.

This transaction here, the method is not decoded, but it cost only around 14 cents on Optimism and would be $42 on Layer 1: like over 300 times cheaper.

I think what has contributed to this is that this app uses so much more L2 gas than a Uniswap trade. If you compare those two things, it's a good example of how much some of these apps do behind the scenes, which would not be feasible on L1. No one likes to imagine, at today's gas prices, paying $40 for a transaction.

But on Layer 2, because of just how things are constructed, it's definitely much more manageable at 11 cents as a transaction fee today. And there's also this other dashboard here, which we kind of mentioned. There are a lot more assumptions going on, but it goes the opposite way: take any L1 transaction hash and, with the alignment a little off, see what it would cost on Layer 2 today.

The call data cost on Layer 2, which we projected earlier, is a big contributor to that. And then, you know, you can make that case as well.

So, I could go into this more if we have time, but I definitely want to make sure we have time for Q&A and other things. In general, if you're interested in Layer 2s and all this kind of stuff…

Yeah, let me throw out some other interesting ways to get involved. So, we have an analytics channel on the Optimism Discord, other apps and tools, and definitely a very active analytics community as well.

And then yeah, I mean, any kind of spells or cool dashboards for any Layer 2 would be welcome. A few ideas here, whether it's NFT-related stuff or bridges, which we didn't touch on. But bridges are a whole other crazy sector, probably more complicated to index.

That's definitely hard to wrangle. And then there are some things around EIP-4844 and the other ideas from today. I was watching some of the Devcon talks yesterday, and it's definitely very hyped. Very exciting.

So, I definitely recommend watching some of those if you haven't already. Yeah, that's the end of my long-winded ramble.

Boxer  40:53  

That was a great ramble. Thank you.

Jackie  40:56  

Yeah, amazing. Very juicy content. Yeah.

Michael  41:03  

A lot of stuff is being thrown out really quickly. Yeah.

Jackie  41:07  

Yeah, I thought it was gonna be a roller coaster ride. And it was a roller coaster ride. A lot of stuff.

Boxer  41:16  

All right. Yeah. I certainly learned something. Maybe before the Q&A, some quick cheeky questions: are L2 devs getting lazy, because there's not as big of an incentive to save gas?

Michael 41:33  

Oh, yeah. I'm not sure. I definitely can't speak to that. But I do know that there are some examples. Yeah, who knows?

For example, there have been some apps that optimize their contracts for how an L2 calculates transaction fees. So, there was a DEX that did call data compression at the app level before Optimism had it at the protocol level.

So, they were doing that at a time when transactions were like $1 or something; they were doing transactions for a few cents. And I believe others were also doing something similar.

I think they announced it when they designed their contracts in a way that uses less call data, to try to optimize for Layer 2 fees.

That's definitely been a pattern. I think you see it with some trading bots, too, which do similar things to try to optimize their call data.
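As a rough illustration of why call data compression helps (a sketch, not any particular app's actual scheme): an optimistic rollup posts transaction data to L1, and L1 call data is priced per byte (post-EIP-2028: 16 gas per nonzero byte, 4 gas per zero byte), so shrinking the payload shrinks the L1 data cost. The payload sizes here are made-up numbers.

```python
# Per-byte call data pricing on L1 (EIP-2028 rates):
# 4 gas for a zero byte, 16 gas for a nonzero byte.

def calldata_gas(data: bytes) -> int:
    """L1 gas consumed just by a payload's call data bytes."""
    return sum(4 if b == 0 else 16 for b in data)


# An assumed 200-byte raw payload vs. an assumed 80-byte compressed version.
raw = bytes(100) + bytes([1] * 100)   # 100 zero bytes + 100 nonzero bytes
packed = bytes([1] * 80)              # hypothetical compressed form

saving = calldata_gas(raw) - calldata_gas(packed)  # gas saved per posting
```

Real L2 fee formulas add overheads and scalars on top of this, but the per-byte pricing is why designing contracts to emit less call data directly lowers Layer 2 fees.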

Yeah, it's hard to say. It could be similar to building a contract on Layer 1 a few years ago, when you were like, who cares about gas fees?

And then all of a sudden, oh man, your gas usage matters. It's still too early to say, but yeah, I don't know. Just kind of speculating myself.

Boxer  42:55  

Yeah. Okay. So, we'll just assume they're not getting lazy. Yeah, I mean, they're making the best use. They're making efficient use of their time.

It's like, why optimize gas costs if they end up being like five cents on a transaction? Maybe that's not really the best use of time. All right, there were a couple of questions throughout the stream.

We have this beautiful question from all the way at the start, which was: why was the median gas price so different between December 2017 and March 2021, even though the daily transactions are roughly similar?

I think this was in Jackie's presentation part. So maybe Jackie, if you can share your screen, and we can take a look at that. And maybe we can make sense of it. Yeah. So, your screen is back online.

Jackie  43:44  

Thank you. Yeah, I think we were talking about this graph. I will make it bigger. This is big enough.

Boxer  43:53  

Yes, zoom in, scroll down a bit. So, the daily transactions and the median gas price, hmm…

Michael  44:03  

I have a speculative guess. Yeah, so I wonder…I think you'll see Layer 1 transactions, especially recently, are kind of capping around a million or 1.2 million per day. And if you look at just the amount of gas used on Ethereum,

you'll see it's pretty much at its cap now. So essentially, there's more demand for gas than there is supply of it.

So, that's a long way of saying my interpretation is that the gas price is more a measure of excess demand than of current transaction volume. That's just my mental framework for it. If there's more competition for a block, the gas price goes up.

Boxer  44:45  

Yeah, I subscribe to this notion. There's only a certain amount of block space, and we can only fit so many transactions a day into the blocks.

So, if there's excess demand, the median gas price can still go higher. It's a function of excess block space demand, pretty much.

Michael  45:07  

Yeah. And there's a cool property about Layer 2 too. You'll see it in that shared dashboard as well: there have been a few days so far, even though Layer 2 transactions are still only a small percentage of Layer 1,

where the Layer 2 gas used on Optimism was greater than the Layer 1 gas used on Ethereum. It just gives a sense that there's way more, I don't know what the limit is, but more space for gas usage via Layer 2s. It's not quite the same limit as on Layer 1, and the pricing of that is very confusing, but…

Boxer  45:41  

Yeah, no, the total block space grows, pretty much, right? All right, then there was…when Jackie was explaining ZK rollups and optimistic rollups…there was a comment that optimistic rollups post data on L1 and ZK rollups only post proofs, and that is correct.

I think Jackie misspoke there…or sorry, yeah, it was a bit confusing. It really is. Optimistic rollups actually post the whole call data onto the L1; ZK rollups just post proofs. Just to make that very clear.

Then, all right, what else do we have? Yeah, I discussed it with someone in the chat; maybe we can bring that up from Michael's part of the presentation. There is an L1 gas per L2 transaction chart.

Michael  46:41  

Yeah, you're not wrong. Well, there is one I was trying to show…can we get to that? Let's see, I think it's the very bottom left. Yeah, very bottom left. It's the one labeled with the orange color scheme.

But that one? Yeah. So, it's trying to show, for a given transaction on Layer 2, how much gas it is actually using on Layer 1. The batch size in the transaction chart is how many transactions are in that batch.

So yes, this was getting at: for a given Layer 2 transaction, what's the size of its Layer 1 footprint?
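A minimal sketch of that amortized footprint, assuming the per-transaction share is simply the batch's L1 gas divided by the number of transactions in it (illustrative numbers, not the dashboard's exact query):

```python
# Amortized L1 footprint of a single L2 transaction: a rollup posts one
# batch to L1, so each transaction's share of the batch's L1 gas is
# roughly batch gas / transactions in the batch.

def l1_gas_per_l2_tx(batch_l1_gas_used: int, txs_in_batch: int) -> float:
    return batch_l1_gas_used / txs_in_batch


# An assumed batch costing 500,000 L1 gas that contains 200 L2 transactions:
footprint = l1_gas_per_l2_tx(500_000, 200)  # 2,500 gas per transaction
```

Larger batches spread the fixed L1 posting cost over more transactions, which is why per-transaction footprints can end up in the low thousands of gas.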

Boxer  47:26  

Yeah, he was asking, "Isn't this a lot, and don't you need to account for the currency?"

So, this is actual gas computation units, the usual gas. Like an ERC-20 transfer: if I send Jackie 100 $DAI, then I will pay 21,000 gas computation units.

If you think about how complex some of the transactions on Optimism are, as we saw earlier in Michael's presentation with Lyra, spending a shitload of gas, which basically translates to doing a lot of computation on-chain,

then 2.5k gas is pennies. It's like 1/10…not even…no, it's about 1/10 of a usual ERC-20 transfer. So, it's like 70 cents or something. It's a really good price.

Of course, it depends on how expensive L1 gas is at the moment (because that's an outside influence). But still, if you pay like 2.5k gas, even if the gas price is like 100 gwei, that's still pretty cheap.
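That back-of-envelope arithmetic looks like this; the ETH/USD price here is purely an assumption for illustration.

```python
# 2,500 gas of L1 footprint at a 100 gwei L1 gas price is a small ETH amount.

GWEI = 1e-9  # 1 gwei in ETH

gas_units = 2_500
gas_price_gwei = 100
eth_price_usd = 1_300  # assumed ETH/USD price, for illustration only

fee_eth = gas_units * gas_price_gwei * GWEI  # 0.00025 ETH
fee_usd = fee_eth * eth_price_usd            # around a third of a dollar
```

Even at a high 100 gwei gas price, the per-transaction cost stays well under a dollar, which is the point being made about L2 footprints being cheap.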

Michael  48:37  

Yeah, yeah, definitely. When I started looking at Layer 2, it was the first time I really ever thought about things in gas units. I forget that I'm now just used to gas units; otherwise, it's just a random number that means nothing.

Boxer  48:48  

Yep. Yeah, it's always hard to get into these details. If you're all the way zoomed in, inside of this level of detail, it's sometimes hard to explain.

We have folks asking, "Do we have any data sets that are visible on-chain for the fraud proof and dispute flows? Or do they just not have one yet?"

Michael  49:13  

Yeah, so I'd say right now, fault proofs are not live in production. There is a repo called Cannon that was developed by geohot, who was the iPhone jailbreak guy and now runs comma.ai.

So, he built the first implementation of a fault proof for Optimism, or what an alternative fault proof could look like. Getting those things into production is definitely a necessary step. Yeah, L2Beat has a good rundown of all the different security properties of the different rollups today.

Boxer  49:50  

So, to answer this question, we don't really…okay.

Michael 49:55  

Short answer, yeah. But they will happen one day.

Boxer  50:00  

But how are they currently handled within the Optimism validator network?

Michael 50:05  

Yeah, so currently, there's only one sequencer. So, it's Optimism sequencing the transactions for now. This is my understanding of things; I'm not a protocol engineer.

But my understanding is that once the sequencer set is more decentralized, yeah, you need that mechanism.

Boxer  50:27  

Yep. Yeah, makes sense. Then we have another one: can you comment on the utility of Layer 2s after future mainnet upgrades, including sharding?

Michael  50:38  

My understanding is that things get cheaper again: as Layer 1 gets cheaper, Layer 2 gets cheaper. Regardless of any future upgrades, if things stay at the status quo today, my framework for what a Layer 2 transaction fee is comprised of says that if sharding makes the Layer 1 fee decrease, that should also make the Layer 2 fee decrease.

Boxer  51:08  

Okay, but I think the person who's asking this question is asking, "Do I need Optimism if there's native sharding within Ethereum?" I think, of course, we do.

Michael  51:24  

Yeah. Yeah. I mean, I definitely subscribe to the idea that people always want things cheaper if they can. But yeah, we'll see what happens in the future.

Boxer  51:37  

Makes sense. I don't even know what this one is. Maybe we can talk a bit about Immutable X. Does anybody have something smart to say about this?

Michael 51:49  

Other people probably do.

Boxer  51:52  

We're data people; we like data. We're not protocol people. All right, and our favorite question to ask data analysts: what excites you most?

Michael 52:09  

Yeah. Let's see. I think, to put it all together, I'm excited about just generally lowering the barrier to doing things.

One side of that is lowering the barrier in terms of transaction fees: how many more use cases can that enable? I think a lot of these things we're doing at the Layer 2 level, and other things to come, are just creating some base primitive that people can build on top of. So, there are lower transaction fees today.

And what was announced at Devcon in Bogota yesterday, the OP Stack idea: we're kind of embracing that.

You know, people can take the OP codebase and change some modules, whether they want to post data somewhere else or make the execution operate somewhat differently, and build crazy games and all these kinds of things.

So, I think there's more excitement in just opening up the surface area of what's possible and seeing what can be built on top of that. A very vague answer, I guess, but yeah, watch some of the Devcon talks, and you'll get hyped.

Boxer  53:19  

Yes, I heard a bunch about this in Berlin. And I like how things just click into place if you think about them long enough. It's very exciting. So very, very much looking forward to what people build on top of the stack.

Really, the modularity of it is very interesting. All right, I think we're on time. So, thank you very much for visiting our Dune-hosted podcast, Michael.

Always a pleasure to see you, and also, oh yeah. Do we actually send out max for this? I think we should. But alright. Yeah. Thanks for coming on.

Jackie  54:05  

Thank you.

Boxer  54:07  

Yeah. Thanks for having us. Yeah, thanks for the great presentation, both Jackie and Michael. And we'll see everyone again in two weeks. Do we have a topic, Jackie?

Jackie  54:18  

Yeah, we're waiting to see what Web3 stuff happens in the next week.

Boxer  54:22  

Okay. If anyone wants to be a guest, slide into Jackie's DMs. We'll figure that out. All right. Thank you, everyone. Bye bye.