If the mainnet launch is delayed (I hope it won’t be), then you should leave the snapshot date as it is, on 14 January 2021. For investors and exchanges that is the most important date, and it should not be changed.
Let’s handle this conversation once we know whether it is necessary. There are strong opinions on both sides, and I think IF we end up in that situation, a PoI vote is the sensible way to decide it.
First of all, thanks for such a detailed report. We appreciate your communication, and I believe that the team is doing their best.
However, one point remains unclear to me. You wrote:
The specifications were to meet 100tps, which it passed.
So technically the testnet works as it should, doesn’t it? It looks a little strange that we’re going to postpone a release which, hmm, works as intended. You may notice that some explanation of the possible postponement was given in your message:
It requires thought and consideration to ensure it can defend from a DoS type issue if the tps were to spike.
But what’s the point of reaching 130 tps to defend from DoS? Why is it so important to reach precisely 130 tps? And does it mean that at 100 tps the network is unstable and at high risk of DoS? I really don’t understand why 130 tps is much safer than 100 — can you explain it?
For people who are not so involved in the development, the whole situation looks like an example of over-perfectionism. Maybe I’m wrong, so I’d like to hear your opinion on these details.
Thanks for the answer.
If you don’t mind, please state the specifications clearly. I would like you to settle the product specification.
1. Is the TPS performance specification 100? I want to know the final specification.
2. What is the performance specification including aggregate transactions? Currently, testnet can include up to 100 Tx.
3. Is the block generation time 30 seconds? Currently it is 30 seconds on our testnet.
If undecided, when will this be determined?
I also think it’s important to make the specifications clear and to publicize them so that everyone can understand them.
It may have already been documented somewhere…
I would like to know whether there is a specification for chain protection when there are more Tx than can be handled. For example, as in Bitcoin, they could be pooled somewhere and the creators of blocks could incorporate the Tx into blocks, usually in order of highest fees.
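To make the Bitcoin-style idea above concrete, here is a minimal sketch of a fee-priority pool from which a block creator picks the highest-paying transactions first. This is illustrative Python only — `Mempool`, `add` and `build_block` are made-up names, not the Symbol/Catapult code:

```python
import heapq

class Mempool:
    """A minimal fee-priority transaction pool, loosely modelled on
    Bitcoin's mempool. Illustrative sketch, not Symbol/Catapult code."""

    def __init__(self):
        self._heap = []      # max-heap via negated fees
        self._counter = 0    # tie-breaker: earlier tx wins at equal fee

    def add(self, tx_id, fee):
        """Pool a transaction with its offered fee."""
        heapq.heappush(self._heap, (-fee, self._counter, tx_id))
        self._counter += 1

    def build_block(self, max_txs):
        """Pop up to max_txs transactions in order of highest fee."""
        block = []
        while self._heap and len(block) < max_txs:
            _, _, tx_id = heapq.heappop(self._heap)
            block.append(tx_id)
        return block
```

Usage: pool four transactions with fees 5, 20, 10 and 1, then build a 3-tx block — the block contains the three highest-fee transactions, and the cheapest one waits for the next block.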
You can reschedule the Symbol launch date, but the snapshot date cannot be postponed; otherwise it will be a big failure in front of the crypto community and the exchanges.
Thanks @GodTanu — answers below:
1. Is the TPS performance specification 100? I want to know the final specification.
Correct, 100 TPS.
2. What is the performance specification including aggregate transactions? Currently, testnet can include up to 100 Tx.
100, same as testnet.
3. Is the block generation time 30 seconds? Currently it is 30 seconds on our testnet.
30 sec, same as testnet.
Answered above: 500 Nodes + Performance Test
A couple of points; hopefully this clears it up.
So technically the testnet works as it should, doesn’t it? It looks a little strange that we’re going to postpone a release which, hmm, works as intended
Yes, it achieves the required TPS; however, it did not handle the situation elegantly when that was exceeded (hence why testnet is running strangely just now), which needs to be investigated before assuming it “works as intended”.
But what’s the point of getting 130 tps to defend from DoS
130 in this context is basically arbitrary (it could be 101, 110, 500, etc.). It would have been better to say that when the network exceeds the target TPS, performance, effectiveness and processing should degrade predictably and gracefully.
Rather than over-perfectionism, it is more a case of ensuring mainnet can cope with a scenario that exceeds the target transactions per second by appropriately managing the unconfirmed Tx cache. It is not to say it will serve 130 TPS, just that it will handle the situation robustly and reliably.
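For illustration only, “degrade gracefully above the target TPS” can be sketched as token-bucket admission control: a node admits transactions up to the target rate (plus a small burst) and politely rejects the excess instead of falling over. The class, rate and numbers here are hypothetical, not the Catapult design:

```python
import time

class TokenBucket:
    """Token-bucket admission control: one way a node could shed load
    gracefully above a target TPS. Illustrative sketch only."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate          # tokens replenished per second (target TPS)
        self.capacity = capacity  # burst allowance
        self.tokens = capacity    # start with a full bucket
        self.now = now            # injectable clock, handy for testing
        self.last = now()

    def allow(self):
        """Admit one transaction if a token is available."""
        t = self.now()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # reject the excess instead of crashing

# Example with a fake clock: 50 tx arriving in the same instant against a
# 100 TPS target with a burst allowance of 10 -- only 10 are admitted.
clock = [0.0]
bucket = TokenBucket(rate=100, capacity=10, now=lambda: clock[0])
accepted = sum(bucket.allow() for _ in range(50))
```

The point is not the numbers but the shape of the behaviour: a bounded, predictable rejection path for traffic above the specification, which is what “works as intended” should include.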
I would like to know whether there is a specification for chain protection when there are more Tx than can be handled. For example, as in Bitcoin, they could be pooled somewhere and the creators of blocks could incorporate the Tx into blocks, usually in order of highest fees
There is — the unconfirmed transaction cache. However, as per Jaguar’s tweet, an issue was identified with it during the stress test, which is being investigated.
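As a toy illustration of what managing such a cache under overload could look like, here is a bounded pool that, when full, evicts the lowest-fee transaction rather than growing without limit — one common anti-flooding policy. `UnconfirmedTxCache` is a made-up name and this is not the Catapult implementation:

```python
import heapq

class UnconfirmedTxCache:
    """A bounded unconfirmed-transaction cache sketch: when full, the
    lowest-fee transaction is evicted so a flood of cheap transactions
    cannot exhaust memory. Illustrative only, not Catapult code."""

    def __init__(self, max_size):
        self.max_size = max_size
        self._heap = []  # min-heap on fee: cheapest tx sits on top

    def add(self, tx_id, fee):
        """Insert a transaction; returns the dropped tx_id (or None)
        so callers can observe load shedding."""
        if len(self._heap) < self.max_size:
            heapq.heappush(self._heap, (fee, tx_id))
            return None
        # Cache full: evict the cheapest tx if the newcomer pays more,
        # otherwise drop the newcomer itself.
        if fee > self._heap[0][0]:
            _, evicted = heapq.heapreplace(self._heap, (fee, tx_id))
            return evicted
        return tx_id

    def __len__(self):
        return len(self._heap)
```

With a cache of size 2 holding fees 5 and 10, a new tx with fee 1 is dropped on arrival, while a new tx with fee 20 evicts the fee-5 one — the cache stays bounded either way.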
I am watching the situation with the launch of Symbol, and this looks very much like intentional manipulation. You inform the community about the risk of another launch postponement, referring to the fact that the testnet starts to behave incorrectly when the number of transactions per second exceeds the limits you specified.
You yourself wrote that the testnet works exactly as planned, but for some reason you start to dramatize and invent scenarios and non-existent ideals that are one thing today and another tomorrow.
Why don’t you follow the example of business giants like Mercedes-Benz, VW, Toyota, Microsoft, Apple and Sony? All of them release their products to the market in a far-from-perfect condition, then refine them over time and sell them as an improved product, and this business model has worked for many years.
And what does the NEM team do? For years they have been chasing an ideal, which is the reason for the postponement of the Catapult/Symbol launch.
With all my love for NEM, I sincerely do not understand why you are not focusing on the strengths the project has and on working correctly, but on scenarios that may never arise, and against this background scare the community with a possible next postponement. XEM had not yet had time to properly rise in price when, after this scary message, it lost 40% of its value. Or maybe it is just convenient for you to be in a continuous search for the ideal, and launching Symbol is not the goal?
Have you thought about updating the product through a phased hard fork?
For some reason, you guys seem obsessed with perfecting and delivering every detail in the early stages of a product.
In Ethereum’s case, the value increases despite postponement after postponement, because it has an excellent way of creating expectations for the future.
Thanks for the answer!
A launch in this state would let a DoS kill Symbol in a minute. The whole network falls apart when pushed slightly beyond its specifications. This must be resolved. A blockchain product must be solid, at least in its fundamentals.
As I see from Dave’s last answer and Jag’s tweet, the problem is not about speed. The problem is that nodes crash when the testnet goes beyond its bandwidth, which is about 100 tps.
As far as I understand, in that case DoS attacks would be fatal for the whole Symbol network: instead of accumulating unconfirmed transactions, all the nodes would just crash. This looks like a critical issue.
I would like a finished product. I like anticipating problems and finding potential solutions before they occur. I respect some of the people who liked your post, but there is no way around it.
Are you talking about security?! What, then, did the guys from Trail of Bits do from June 2020 to December 2020? That’s right: a security audit. Do you understand what a SECURITY AUDIT means? And Symbol has its security audit completed. Trail of Bits is among the most highly rated auditing firms in the industry. Does it turn out that the guys from Trail of Bits are undeservedly eating their bread?
The difference is that there are people like you who are ready to wait 5 or 10 years for a finished product and enjoy the waiting, but there is another category of people for whom it is important that words and promises — in this case from NEM — do not diverge from reality.
I would like to wait 6 more months.
Well, I’m talking about the stability of the platform, and the fact that a serious attack vector has been identified. This requires proper handling. Instead of complaining about unfulfilled promises (which were never given anyway), one should be thankful that these errors are found by engaged people before launch. This is exactly what builds confidence and trust. I understand your frustration, but come on — there is a tremendous amount of great stuff achieved by a few very passionate people. Pushing again and again does not help; you can either contribute to a solution or have a bit of patience, please.
Regarding the security audit, AFAIK they audited the sources and delivered valuable input for a secure platform. But as you can see, it takes more than an external company giving its stamp. We have people here who actually care about their work — a welcome exception these days.