Alejo: We have a very special guest this week: Robert. He’s one of the world’s foremost Move auditors and security experts. He has audited a lot of projects and core protocols. We’re very happy to be working with Robert and we’re really excited to have him here. He does stand-up comedy in his free time, so maybe we can see that side of him as well. Let’s have Robert go ahead and introduce himself.
Robert: Thanks for having us on! I don’t have any jokes prepared at the moment but maybe I’ll think of some as we go. I help run OtterSec, a smart contract auditing firm. We look a lot at the internals of Move and try to have a deep understanding of both the VM layer as well as the Aptos-specific SDK layer. With that, we try to be able to guarantee certain aspects of security. In the past, we’ve worked closely with large institutions like Alameda Research and Jump Crypto, and we also have retainer arrangements with the Pontem Foundation to audit their core code. I’m excited to be in this space to talk more about what makes Move such an exciting language, and also how we’re working with Pontem to secure their applications, because you have a good focus on security.
Alejo: Awesome, thank you. Maybe we can start there. Tell me, what excites you about Move? How is it different, and how is it the same? What are some paradigm shifts that you see, especially in the area of security?
Robert: This is a really good question. This is actually a bit of alpha: we’re going to publish a blog post on this very subject in probably a few days. I think Move as a language is very exciting. It’s one of the main sells for moving onto Aptos. I would call Move a Rust for blockchain; it’s more domain-specific, which means that it’s able to avoid a lot of the general-purpose issues that Rust has. For example, you don’t need to manually serialize or deserialize data when you’re using Move. It’s a much higher-level language that gives developers more tools to be able to write more secure programs.
Is it ok to get technical here? I think one really cool specific aspect of Move is that the VM is extremely versatile. One example of this is that references are natively available; they’re a first-class value in the Move VM itself. So you could actually, for example, in the bytecode, push a reference on the stack, and you can also dereference a reference directly. An implication of this is that when you want to call a function, you directly pass in a reference. In general, Move is a much more domain-specific, blockchain-specific, high-level language, which I think resolves a lot of security issues that we see in normal programming.
Alejo: Let’s decompile that a little bit more. What does it mean to be a first-class value? What’s the analogy?
Robert: That’s a good question. So, first class just means it’s supported natively by the virtual machine. In contrast, to deserialize data with Rust, for example, you need to have some sort of intermediate representation for that value, or some sort of representation to convert from the underlying bytes into what your program uses. You need this intermediate step where you convert from the raw binary to a usable Rust value. On the other hand, Move skips that step completely; you can just use the value directly.
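To make that contrast concrete, here is a minimal sketch (plain Python, with a hypothetical field layout) of the intermediate deserialization step Robert is describing: raw bytes have to be decoded into a usable value before the program can touch them. In Move, the VM hands the program a typed value and first-class references directly, so this step disappears.

```python
import struct

def deserialize_transfer(raw: bytes) -> dict:
    """Hypothetical example of the 'intermediate step':
    decode raw bytes into a structured value the program can use.
    """
    # Assumed layout: 8-byte little-endian amount, 4-byte recipient id.
    amount, recipient = struct.unpack("<QI", raw)
    return {"amount": amount, "recipient": recipient}

# The raw bytes are unusable until they are decoded.
payload = struct.pack("<QI", 1_000, 42)
print(deserialize_transfer(payload))  # {'amount': 1000, 'recipient': 42}
```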
Alejo: So help me understand this a little bit more. Is ETH native? Is the Compound or UNI token a first class value? How does that compare on Ethereum?
Robert: For ETH, you would have a contract, and then you would need to have a back end to load the values, so you actually don’t have first class access to the higher level structure itself. You can’t really pass in a reference to a token, for example. You would need to call the contract in order to receive the value. There’s still that intermediate step, but it’s a bit different because all contracts’ states are stored within the contracts themselves.
Alejo: So I have to call a contract like an ERC-20 contract?
Alejo: Is it more efficient as well? On top of safety, do I save on gas?
Robert: Well, I guess that part is sort of separate from the gas savings. I think Move as a language is designed to be very fast. It's scalable, so the cost for users should also be lower. Although gas is kind of a VM-native concept, so Move gas is denominated in entirely different units than ETH or Solana gas.
Alejo: Let’s make this comparison a little more abstract. What would happen if we were to put these side by side on something like Cosmos or Polkadot, which one would do better? The Move or the EVM? What aspects should we be tracking?
Robert: In theory, you could write these contracts in any language. There are many examples of, I'd say, probably pretty safe contracts across Ethereum or any chain. But I think the main point for me is that Move is safer by default. It lets you write safer contracts without trying as hard or without paying as much attention.
Alejo: So what you’re saying is that it’s way harder to shoot myself in the foot, essentially?
Robert: Hopefully! No guarantees. Ideally, yes. I know Pontem cares a lot about security, but to be honest, I think there's a lot of smart contract developers that either don't care a lot about security or don't understand the full implications. They just write code that they deploy to mainnet as fast as possible, and that often ends up shooting themselves in the foot.
Alejo: So how can people, especially the end users, really make sure that they’re not getting affected by the teams and developers doing things like that? What are best practices out there for projects? What should our users be looking out for to make sure that they're safe in the metaverse?
Robert: I think the primary thing you can do as users is to stay vigilant. There are a lot of projects, but it’s kind of important to ensure that the ones you interact with are actually reputable and safe and have gone through the process of working with a professional audit firm and getting their contract audited. I think that's probably the best way to mitigate the risk of security vulnerabilities. Of course, audits aren't perfect, so there's always a chance that they can miss something, but the general position or the team's posture towards security is really important. Just having a team that cares about security makes me a lot more confident in their protocol than a team that just really wants you to push your stuff to production.
I guess one thing that you can look out for is the commit on the Github repository versus the commit on the audit report. If it's different, you should maybe ask a few more questions about if there's an ongoing engagement with the team, or if you know the protocol is just pushing stuff to production or to Github without really consulting on those additional changes.
Alejo: Is there some way that we as a protocol can maybe trustlessly do that? Is there some attestation you can make as an auditor to say that the version that’s live right now on Github is the one that you audited? Could it be in a wallet potentially, or some place where we can see attestations or signals?
Robert: On all our audit reports, and we’ll be producing some for Pontem soon, we include the commit hash that we reviewed and stamped. But the wallet integration is a good idea, and we are talking with a couple of wallets. I think it would be really cool if, natively in the wallet app, there was a way for users to see that the code on-chain is actually the exact code that was approved by an auditor. And that could be done, for example, by just comparing the hash of the code on-chain.
Then ideally you could have some sort of option even, where if users want to go fully trusted mode, they only interact with contracts that have been completely verified. In this way, you know too if the contract has changed underneath you.
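The check Robert describes could be sketched roughly like this (a hedged illustration, not any wallet's actual implementation; the bytecode and function names are made up): the auditor publishes a hash of the exact code they reviewed, and the wallet compares it against a hash of what is actually deployed on-chain.

```python
import hashlib

def code_hash(bytecode: bytes) -> str:
    """Hash the deployed bytecode so it can be compared to an attestation."""
    return hashlib.sha256(bytecode).hexdigest()

def is_audited_version(onchain_bytecode: bytes, attested_hash: str) -> bool:
    """True only if the on-chain code matches what the auditor stamped."""
    return code_hash(onchain_bytecode) == attested_hash

# Usage: a wallet could warn users when the deployed code drifts
# from the version named in the audit report.
audited = b"module example { ... }"   # placeholder for real bytecode
attestation = code_hash(audited)      # published alongside the audit
print(is_audited_version(audited, attestation))              # True
print(is_audited_version(audited + b"patch", attestation))   # False
```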
Alejo: That’s a really good idea. We’re definitely going to figure out how to make that a feature and add it to the roadmap. This is also a good pivot into wallet security. Given that the wallet is the interface into the metaverse, tell me more: what should we be looking out for with wallet security? There have recently been hacks. We'll address the elephant in the room of what happened with Solana. Maybe you could give us some of your opinions there and how users and projects can protect themselves? What do you think about the wallet as an interface, or getting rid of the browser as an intermediary?
Robert: There’s a lot of questions to unpack here. The first is security. I personally think that wallet security has kind of been flying under the radar for a long time. People care a lot when they see that dApps have potentially been compromised, but prior to this Solana event, no one really cared if wallets had been audited. Which, if you think about it, doesn't make that much sense, because wallets are essentially your bridge into this entire world of crypto. If that centralized bridge gets compromised, then everything that you interact with also potentially gets compromised. It’s a single point of failure for everything that you interact with.
Specifically talking about the Solana events, we are actually working with the teams involved. We have been retained by the Solana Foundation, so we’re working with them to try to figure out what actually happened. We did publish a report on our findings, which you can find on our Twitter account. I think the summary of it was that there were mnemonics or secret keys that were logged accidentally. We suspect that that could have led to the compromise of user keys. I think the takeaway from those events specifically is that you have to be very careful when handling sensitive user data. You have to make sure that that data isn’t ending up somewhere accidentally.
It was actually a pretty subtle accident. They weren’t intentionally trying to log it. It just accidentally got stored in an object which ended up getting sent to the server. There were several layers of abstraction that they either didn’t really take the time to figure out or they didn’t notice until it was too late.
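The failure pattern Robert describes, where a secret rides along inside a nested object that gets shipped to a server, can be sketched with a toy mitigation (plain Python; the field names and telemetry shape are invented for illustration): strip known-sensitive keys before anything leaves the device.

```python
import json

# Hypothetical deny-list of fields that must never leave the device.
SENSITIVE_KEYS = {"mnemonic", "private_key", "seed"}

def redact(payload: dict) -> dict:
    """Recursively replace sensitive fields before serializing telemetry.

    The subtle bug pattern: a debug object quietly carries wallet state
    several layers deep, and a generic serializer sends all of it.
    """
    clean = {}
    for key, value in payload.items():
        if key in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, dict):
            clean[key] = redact(value)
        else:
            clean[key] = value
    return clean

# The secret is nested two levels down, exactly where a naive
# "log the whole object" call would sweep it up.
telemetry = {"event": "wallet_opened", "state": {"mnemonic": "abandon ..."}}
print(json.dumps(redact(telemetry)))
```

A deny-list like this is only a backstop; the stronger design is to keep secrets out of any object that a logging or telemetry path can reach at all.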
Alejo: Is that something that could have been caught in an audit report?
Robert: Yes, but I think you'd have to be looking for it specifically. I think for devs, you have to be very careful when you’re doing this kind of stuff. Either you need to have an external firm who's specifically focusing on what to look for, or you need to spend a lot of time looking for these kinds of edge cases where sensitive user data might be passed around in ways that you don't intend it to. I think this is a wake up call for wallets in general. There’s a lot of wallets and to be honest, a lot of them don’t care that much about security. We’ve actually found and reported issues to a number of wallets outside of Phantom. While it is a bit concerning, we've also seen some of the wallets that we worked with in the past have been improving their posture. Hopefully with the events on Solana, there will be more of a focus on security on the wallet side as well.
Alejo: This central point of failure is actually pretty scary. Think of how many people use Metamask over other wallets on Ethereum. How do we solve this problem? Is there such a thing as a decentralized wallet?
Robert: Personally, I think the solution would be, as users, to push for more of a focus on security. Only work with wallets that have a strong security partner. Ask questions about what the wallet has done to ensure its security. Potentially, you could have a decentralized wallet, but I’m not really sure how that would work. One consideration is that the code for the wallet needs to be somewhat centralized; I’m not really sure you can have a decentralized wallet code base. I mean, maybe you could have multiple front ends or something, but there’s always going to be a single point or single points of failure for the user.
Alejo: So do you think wallets should be as thorough in their audits and as public with their reports as smart contract dApps?
Robert: I think wallets are just as, if not more, important than dApps in terms of their security. Users need to be confident that the wallet they’re using won't accidentally sign a transaction that they don't want to sign and that it correctly shows all the transactions. There was some interesting research that someone else did on whether you could figure out, as a contract, that you were in a simulation, like an RPC simulate call, and change your behavior based on that. Of course, you shouldn't really be interacting with contracts you don't trust, but it's also an interesting question: can you really trust what is happening at the simulation level? Although I don't know if that's a wallet vulnerability, per se. I'd say that's more of a VM design concern. I think wallets definitely need to care a lot about security and users need to push for the security of the wallets they use. At the end of the day, the demand for security has to come from the users. If users don't care about security, then wallets are not going to care about it either. But if users demand security, then wallets are going to have the incentive to get their code audited.
Alejo: I just had a thought. What if people just download and run the wallet themselves? It’s not sending any information anywhere. Could that potentially work? Or maybe some framework where, even though you might need extra compute on your computer or phone or whatever, you don't need to trust any third party to send stuff to you? This is part of the ethos of Web2 versus Web3, right? In Web2, you want to track everything. You want to see where the user is clicking. All these logs can be useful for improving the product, but it's also a double-edged sword, right?
Robert: Yeah, definitely. I think even in Web2 there's a huge amount of concern around logging sensitive data. Although the issue in Web2 is that it's not a direct loss: even if you steal a bunch of usernames and passwords, you don't have direct access to money. In Web3, if you have someone's wallet private key, you can just drain their funds immediately. Which is why I think Web3 is much scarier.
Alejo: Yeah, that makes sense. This is really good feedback. Honestly, with all websites, we probably shouldn't be logging anything: emails or IP addresses or anything. We shouldn’t be sending them out to any AWS server, even though I know it's useful. We can talk about that: how do you make the simplest thing that collects the least amount of information possible, so that there's not even a leak because you're not collecting anything?
Robert: I think that’s also a good idea for users. Ideally, you don’t want your data tracked by some third-party provider. So minimizing that also has the positive side effect of minimizing the broader issue of corporate tracking of users.
Alejo: There are even concerns here beyond just leaking your personal information. There’s also potentially MEV, maximal extractable value. This centralization, or central point of failure, means that these wallets have a lot of information that could be very valuable for traders that might be trying to make some money based on the trades that you're making. People are probably familiar with high-frequency trading in Web2, in traditional finance; it’s the same issue of what people call “front running”, or “back running”, or “sandwich attacks”, or any of the other forms of MEV. I wanted to hear your thoughts on this as well.
Robert: Yeah, I think if you have some information, that could potentially allow someone to exploit it. Although, at least from my experience, it seems that most of the profits come from either looking at the mempool or observing what’s been happening on chain and trying to use that to profit.
Alejo: Don’t the wallets act like a transmitter of this information? I want to hear your thoughts because it does seem like they're kind of like, not a gatekeeper per se, but the transactions go through them and they get to choose which validators pick them up.
Robert: Yeah, I think payment for order flow for wallets is definitely pretty interesting. I don’t know if any wallets do that right now, but I haven’t looked too deeply into it, so it’s possible. My initial thoughts are that the validators probably wouldn't be the ones doing MEV; it would probably be external searchers. So if a wallet were to do that, it would connect to some marketplace or something where it could stream the transactions, like Flashbots. I also don’t think wallets should be doing this, to be honest. It violates the trust between them and users.
Alejo: I think that the argument is that someone’s going to be doing it because they know that it’s flowing through there. Why not have it be transparent? I guess that's the argument for Flashbots. This is happening, so let's just put it on-chain and allow an efficient market to evolve around it. So maybe we allow this type of MEV and not that type of MEV. It’s the Wild West out there. How do we create a safe saloon where, if you’re going to get front-run, at least you’re getting paid back for it? Because someone’s going to do it, why not let the cowboys that have your best interests in mind do it?
Robert: That’s fair. I guess I’m also not sure which is more important, the order flow or the ordering in the block. I think they're both important, but I think most of the rewards would end up going to the validators themselves, because they control both the order flow, since they receive all the transactions, and also the ordering. They control what block actually gets produced. I think the validators would have the upper hand there.
Alejo: The issue is if everybody’s using Metamask and Metamask is, by default, sending it all to the same validators, that seems like a conflict of interest. Maybe there's a way to randomize who you send the transactions to, but then that might also make your transaction go through a lot more slowly.
Robert: I just think that with the optics of payment for order flow, there was a huge mess on Reddit over that. I just think it’s not a great thing to do. I think it's kind of risky in terms of how users perceive it.
Alejo: I see two options: you either charge some form of fee so that people kind of pay up front for the AWS servers. Or people don't see the fee, but they're paying a spread, let's call it. Either way, you just need to be transparent on what the costs are and then let the users decide.
Robert: I think as long as users understand and know the aspects of what is actually happening, I think that’s totally fine then. It just needs to be properly communicated to users.
Alejo: Cool. Let’s change gears here. We have a lot of questions that came in. I don’t know if we’re going to have time for all of them. Let’s let someone up to speak.
Jeff - Speaker: I love this topic about security, especially for a new platform, because my background is in education, which I then shifted into the Web3 space. I’m running cohorts and training developers, and one of the things we talk about from day one is every different way that you can create secure contracts and infrastructure. The best answer to this question is, of course, you want those audits and checks, but you have to teach the developers from day one a fairly standardized way of testing everything along the way. As much as we want to have that important moment of shipping, you have to find that fine balance of shipping fast but also making sure you're shipping strong, good code that's safe for the users. Because one misstep, like you were saying a few minutes ago, someone accidentally does something in the code and it's not caught, and you're talking about billions of dollars. So I think it starts at the beginning with the way you train people to be security-minded, as much as they’re ship-minded.
Alejo: That's a really good point that I think about every day, because coming from a traditional tech background, the ethos is to move fast and break things and then learn from those mistakes and make the process better. But here if you break things, it’s not good. So, I've had to contend with this a lot and would love to hear your thoughts, Robert, and also yours Jeff.
Jeff - Speaker: I'm with Web3 Builders Alliance, and we are currently on Cosmos. We are coming to the Move networks and Rust-based smart contracting language. We offer a Masters level cohort, not for beginners, for people that want to be senior developers.
Alejo: We'll actually test the Move VM on Cosmos in the future. We forked the Libra VM, made it Wasm compatible, and played around with the Cosmos SDK. We also have a Polkadot parachain. We’re obviously happy to have you here on Aptos, but we'll be coming to Cosmos soon, too. But on this topic of moving fast and breaking things quickly: from the Kusama/Polkadot ecosystem, I learned that you can actually test, as long as you tell everybody that it’s going to be chaotic. I like this idea of having maybe two versions of apps. One version is immutable, unchangeable; you trust that nobody has any form of multisig access to it. Nobody can change it. There might be stuff wrong with it, hopefully not because it's gone through audits, and that's your production-ready version. But what about a fork of that that’s upgradable, where you can ship things fast? You can tell everybody, “Hey, this thing might not be fully audited, we're testing it out.” It would allow us to move quickly, test things out, experiment, break things, break not only the code, but maybe even the game theory behind some of the assumptions that we have. So I'm kind of curious to hear your thoughts on that.
Robert: That’s an interesting design. My primary concern would be, how do users know which one to use? I think the issue is that users don't really know how to evaluate these risks. Even with smart contracts, it's very difficult for you to know what the probability of getting hacked is. We don’t even really know. I think in this case if you have two versions of the same app, and one is safer and one is not as safe, I guess users that are willing to take on more risk could use the constantly shipping one. But I think realistically, if any of them got hacked it would still be kind of a PR crisis to deal with.
Alejo: In that sense it’s more so positioning, you know, “this-thing-will-get-hacked.com” vs. “this-thing-is-not-going-to-get-hacked-(hopefully).com”. Just put giant red disclaimers everywhere that say use at your own risk, don’t put more money in than you’re willing to lose, etc.
Jeff: The incentivized testnet is a good model, too. If you incentivize people to participate in your testnet and to give feedback, and you reward them with useful tokens, you’re feeding into the user base. It’s still benefitting the projects because even if they want to gamify using the testnet to get more tokens, at least they're using it and everybody's winning. You're getting feedback and they're using the testnet. Their goal is to break stuff. They win more tokens in the incentivized testnet by trying to break your stuff. You feed into the users’ emotional needs. Look, the more red you put on a screen saying “Don't use this” the more the total degens are going to come in and use it. It’s better to just say “Come and use it and break it and we’ll give you some tokens.”
Alejo: And you can even put bounties on the other, like, Hey, if you do break it, there's this stake, right?
Jeff: Totally. The thing about the word “bounties”, though, I mean, psychologically, you know, I say the word “bounty.” I wonder how many people on this call think, well, you gotta be a dev to do that. You don't necessarily want only devs that come in and try to break your stuff, but you also want regular users to haphazardly use your stuff and try to break it from the user standpoint, not a technical standpoint.
Alejo: The way that I think about the architecture of this incentivized testnet, there are two ways. You could create a separate network and spin up your own nodes. I think the issue there is that if there's no value flowing through it, there’s no spam protection. So this is why I like the idea of just having it shipped somewhere where there's real value, like the Kusama idea. So I'm kind of curious to hear about that incentivized testnet, but maybe in production or live.
Jeff: There's a couple different versions of how this looks. And look, there's no real value because they're testnet tokens. The way you create value is if there's an input of somebody submitting feedback or “I broke your stuff,” the output is–
Alejo: The difference is if I launched it on Aptos mainnet, let's say, versus on a devnet: people on Aptos mainnet can't spam because they need Aptos tokens to send transactions, whereas a DDoS attack on your devnet is very hard to prevent. People just request tokens from the faucet, or you need to do KYC/AML or something like that, which is very onerous and could potentially have privacy implications. So how do we launch this thing in a real environment where the spam protection is there? Potentially, this is where I think you need some form of real-world value, for these things to just take on a life of their own. I think it needs to be live, in the wild. You can't keep it in the lab, otherwise you're not really testing all the things that could break and you're not benefiting from the spam protection of it being live in the wild. Sorry, I interrupted.
Jeff: No, you’re totally fine! For me, a great question leads to innovation. I haven't thought through what I’m about to say, but perhaps you play with some sort of made-up token. I know that’s total blasphemy, but just by using it, you’re gaining points that can later be redeemed for real value. So, for instance, you launch this thing on mainnet and you can get faucet funds, but it’s limited; you can’t get a billion of them. You go in and play, and you gain points for playing, and once you hit a certain threshold, you get a prize that you can trade in for real value at some point. Now this isn't going to attract your DeFi degens, because they don't want to play games, but it's going to attract enough people that do want to play to get you some feedback that you might not otherwise have. It’s a model that could work.
Robert: I think the concern could potentially be around if that could be viable. If you have a token that has real world value just by using it, it might be possible to fake that usage somehow.
Alejo: I think you just want to prevent the bots from spamming whatever mechanism you have. The spam protection is the real world value. It costs you money. So I think this idea of tokens as having real world value is obviously more philosophical. Say we form a bitcoin clone and we just launched it into the world. What is real world value there? If someone can spam attack it, then that’s probably not real world value. But if someone needs Aptos tokens to spam attack it, then you know at least you’re starting to tie some real world value into it. Let’s say I’m a bot, I spent $1,000 in gas getting these coins, and now this thing is worth $1,000. Again, you launch this into the world and see what happens, test it out. I think you do need to connect it to bounties for bugs and vulnerabilities.
Robert: I think another central concern is that a lot of bugs are edge cases, so it's very difficult to find them unless you're intentionally reading through the code. Sometimes it might not even be possible to find them through the UI. Sometimes, it might be that the UI is safe to use, but if you modify the UI a bit or make your own custom contract calls, you are able to exploit it.
Alejo: I guess one use case that I’m thinking of is this: Let’s say Aave is working on a v7 and they're thinking of some complete new innovation that no one has ever done in the world. They can get 10 million audits, but you need it out in the world for someone to figure out what would happen if I do this loan with this token, and then bring it over from this other bridge. Maybe some weird game theory or economic attack that's not even in the code happens. How do they test that without it being live in a real environment? I'm just thinking, if they're going to be publishing a version and trying to push it to people, maybe there's a way to allow people to test it live, in production, without it actually being the production version. Is there a way to do this safely?
Robert: Yeah, maybe for economic design or incentive models, it would make more sense. Because then a layperson could potentially find an issue.
Alejo: Or even the experts, right? Like I would invite you, Robert, and everyone else that thinks they can hack it, because at the end of the day, if you do, there should be a bounty or some white-hat hacker reward. And that can be the tangible, real-world value that's attached to it. It's like, OK, if you break it, there's this prize. I think Jeff was saying that. I think it’s a good idea.
Robert: Yeah, I think that could work, too. There are a variety of different ways to incentivize communities to look at contracts or designs.
Alejo: I just wanted to say that we’ve hit the hour mark, so if you need to go, Robert, feel free. I’m staying on a little longer and you’re welcome to as well. Jeff, you're welcome to stay as well. Let’s bring up another guest, Rasheed.
His questions were kind of hard to hear, but I think maybe the first one was: are there any real world vulnerabilities that were caught, and then what does this process look like?
I think all of that will come out in the reports. If there are issues, we’ll work closely with Robert to fix them in real time. If there are vulnerabilities, this is exactly why we work with someone like Robert, to help us find them and fix them. We publish a public report that anybody can go out and read, so they can feel safe using the products. It's part of the process of launching these things, especially in an entirely new ecosystem. You have to be very careful. We’re also being redundant with auditors, so we’re having multiple audits. Sorry Robert, hopefully it’s not cheating. But you’d probably agree that it’s good that we have redundancy.
When we launch, we're going to be ready by day one thanks to Robert helping us do these audits. We're also going to be doing a lot of penetration testing on the wallet as we discussed, that's going to be very important. Pretty soon you'll be able to use these products on mainnet when Aptos goes live. Everything is live on the devnet right now. Check out liquidswap.com and test it out.
I’m kind of curious to hear your thoughts, Robert, on how community can tie into security.
Robert: I think user retention is one of the stickiest things. This matters a lot for dApps, but especially for wallets; you essentially monetize the users’ landing pages. That’s why Metamask can charge relatively high fees and make a bunch of money that way. For security, we don’t directly work with the community, but we work with protocols which care a lot about their community, and we try to be as active in the community as possible, which is why we're on this Twitter space, for example. It's good to hear feedback from the protocols, but also the users, and try to understand how we, as an audit firm, can best respond to the users. Because in the end, we all serve the users, and we’re trying to do our best to make this place better for everyone.
Alejo: Yeah. And if we feel safe, I think people will want to stick around. That's probably the number one priority, right? Just to make people feel safe.
Robert: Exactly, I think there's been a lot of negative attention in crypto lately with all the security issues. But yeah, one thing that we care about is making sure that hopefully the protocols that we work with never have to deal with them. That’s the goal.
I think that’s it from my side. This has been really fun and thanks for having me on. It’s great chatting about security with everyone.
Alejo: Thank you for coming on and we’ll have you back on soon. We’ll also bring on more guests. Thank you all for asking questions and we’ll see you next week.