eMunie

Some news on the development of eMunie (this was posted by the developer, Dan):

Many of you here know that quite some time ago I developed the idea of what we call Block Trees, and we've been running on a Block Tree model since July of 2013. Using a tree instead of a chain has many benefits, the most significant being scalability: as we've seen in many of our beta tests, even a tiny network can support transaction loads in excess of 100 transactions per second.

Unfortunately, I never published a paper to claim the idea, and I was pretty cavalier with the details of how it worked, so of course since then other "developers" have taken large chunks of my ideas and passed them off as their own. I guess who can blame them; trees are pretty awesome, but a mention would have been nice (naive, look at the egos in the community!).

Still, we are the only project to have actually developed and run a tree successfully, with many of these other "developers" still working out the details I didn't furnish the world with…aaaaaand…we're throwing it away because it's obsolete!

Over the past 6 months I've been working on a side idea to improve on the Block Tree concept, and I'm pleased to report that over the past 2 days we have run a founder test with the first version of this ultimate goal. The initial test was of course not without its glitches (the main one being a mistake on my part), but we have successfully run a "Blockless Tree" up to 120k transactions in that time.

The next step is to replace the all-powerful, all-knowing single tree with what I can only describe right now as relational ledgers.

So what exactly is this new black magic and what are the benefits of it anyway?

Not wanting to make the same mistakes as before, details are going to be sparse until I've got the paper written. Jazzer is coming to stay with me for a week over the Xmas period, as he's had more experience than me at putting papers together, so we will thrash one out then.

Until then, put simply, the new upcoming ledger model gives us the following main features:

    Unbounded transaction volume: many hundreds of thousands of transactions per second globally; the larger the network, the more transactions we can process
    Faster (like 15 sec wasn't fast enough) transaction confirmation times.
    "Instant on" sync for light clients (mobiles) without a need to use 3rd party wallet services…EVER!
    Redundant & distributed ledger, service providers can hold a portion of the information and still operate 100%
    More efficient use of storage and bandwidth
    Historic data can be easily pruned by nodes not wishing to store it
    Allows storage of large amounts of data within transactions (invoices, images, scripts, video…) without forcing non-recipient clients to download it

There are many other smaller benefits, but those will be detailed as and when the time comes.

Final implementation should be completed within a couple of weeks, Stage 1 of the conversion is almost completed and I'll be rolling out a Beta during the next week with a first test of this.

This new technology puts us light years ahead of other projects currently in development. Everyone is still on clunky Block Chains, trying to move to Block Trees, which we are throwing out for "Relational Channeled Transaction Ledgers" (yes I know, it needs a cooler name!) :slight_smile:

That sounds very interesting, and it could be the solution for processing as many transactions as Visa/MasterCard do.

But since I don't know any details, I don't know if that really works and if there are any disadvantages.

An update from eMunie, if anyone cares:

"Hey everyone,

Thought it was about time I did a bit of an update, as the chat is still sitting on the to-do list and it's been a few weeks.

I've got a busy couple of weeks coming up pre-Xmas, all exciting stuff I assure you, but first let me give some details on what has been happening with the client.

Conversion to Channeled Ledgers

Most of you guys are aware that a couple of weeks back I announced a new transaction model that was going to be implemented. I am performing this work in steps, as that's much easier than trying to throw it all in at once; it consists of 3 stages overall.

Stage 1 is completed and involved removing blocks from the current transaction tree implementation. This has been through a couple of small testing rounds and proved successful. Dataset sizes are reduced by around 20% due to the removal of blocks and the improvements that can be made to transaction data without them; we should also see a performance increase.

Stage 2 involves finalizing transaction behaviors and implementing any missing components that will be required for the final stage. Additionally, there are a lot of peripheral functions that need to be modified, added, or refactored before I can implement the final Stage 3 meat. I'm also taking time to cut out redundant code, tighten up the codebase in general and squish a load of various bugs while I'm at it. Stage 2 work is long and grueling; it's all about the prep, so a day or 2 of breaks on less intense stuff is nice! :slight_smile: This is the stage I'm currently at and coming to the end of.

Stage 3 is the birth of the beast. It mainly involves removing the current single-tree ledger model and replacing it with the channeled ledger model. Sounds simple, but in practice it's certainly not, as the global ledger is composed of many sub-ledgers, which can all run within their own space and all of which can be distributed around the network with varying levels of redundancy. Very few nodes will hold the full channeled ledger, with most only holding what they need for their wallet. I hope to have Stage 3 completed before the end of Dec.

Other Client Work of Note

If you know how I work, then you know that (aside from periods of hermit time) I never work on one thing at a time. The same is true here; aside from the above, I've been working on some other fairly major components and changes.

Perhaps the most major of which is switching out the Profile, ENS & Ratings stores from a local repository to a remote one.

Previously this data was downloaded and synced by all clients, and then referenced locally when required, which, while convenient, didn't lend itself to a nice user experience for wallet-only nodes and increased the likelihood of issues/corruption with that data. I've been unhappy with this approach for some time, but continually put it off as it worked and there were bigger jobs to do. Now that I'm refining a lot of components for Stage 3 of the transaction conversion, it made sense to do it.

This data is now served by dedicated nodes which have elected to perform that role, as you would with a Hatcher, ERC server and the other system services. These service nodes sync between themselves, process and validate new data, and then deliver that data to requesting nodes.

Admittedly the sync and storage mechanisms I've implemented are a little crude and don't lend themselves to syncing large data sets of 10-20M+ records in the most efficient way. However, even assuming a high rate of user adoption of these satellite services of 20%, we won't have to revisit the way this is handled until we have 100M+ active users, and even then "the files don't change, just the filing cabinets they are in". Thus I decided that what we have is more than sufficient for the medium term.

There are numerous other little "fun" changes too, mainly to the GUI, getting that more responsive and streamlined; I've also overhauled chat ready for implementation here, built an emoticon engine for text strings, and started a micro-message engine for dealing with content comments (marketplace questions and feedback, for example).

Busy times!"

Sounds pretty interesting. Sounds like he is going to have lots of side chains and no main chain.


Someone asked the following:

"And what makes the trees become a forest? There must be something like cross-tree transactions, otherwise we'd have separate currencies in every tree."

And Dan's reply:

"Not going to get into that too much here before the paper is out.

But there are many trees that can cross-transact with each other; new trees are created on new party -> party transactions.

So let's say that A transacts with B....a tree is created for transactions between A -> B, and that tree is a "channel". A then transacts with C, and a new tree is created for transactions between A -> C. If in the future A & B or A & C transact again (in either direction), those transactions are placed in the relevant channel tree.

A single tree is already better than a chain, but having multiple smaller trees that can co-exist together in a distributed manner allows all of the huge benefits I've already outlined, such as crazy performance and extremely fast sync times."
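The channel-per-pair scheme Dan describes can be sketched in a few lines. This is purely an illustration of the idea as stated in the reply, not eMunie's actual (closed-source) implementation; all names here (`Forest`, `transact`, the tuple-based transaction records) are hypothetical:

```python
# Illustrative sketch of the "channel per party pair" idea: each unordered
# pair of parties gets its own mini-ledger (a "channel"), created lazily
# the first time the two parties transact. Names are hypothetical, not
# taken from the eMunie client.

class Forest:
    def __init__(self):
        # (party, party) -> list of transactions in that channel tree
        self.channels = {}

    def transact(self, sender, receiver, amount):
        # A -> B and B -> A share one channel, so key on the sorted pair.
        key = tuple(sorted((sender, receiver)))
        self.channels.setdefault(key, []).append((sender, receiver, amount))
        return key

forest = Forest()
forest.transact("A", "B", 10)  # creates channel (A, B)
forest.transact("A", "C", 5)   # creates channel (A, C)
forest.transact("B", "A", 3)   # reuses channel (A, B)
```

With three transactions the forest holds two channels, and transactions in either direction between the same pair land in the same channel, matching the A/B/C example in the reply.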

Question:
"Hmm... that means the number of trees increases quadratically with the number of users. Does it scale?"

Answer:
"It sure does; the amount of data present in the Forest is the same as there would be in a single all-knowing tree.

Channels aren't directly tracked or managed, so there is no cost associated with quantity."
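That scaling answer can be sanity-checked with a quick simulation of my own (under the simplifying assumption that a channel is just a list of transactions): however many channels appear, each transaction is stored in exactly one of them, so total storage matches what a single global ledger would hold.

```python
# Sanity check of the scaling claim: the number of channels can grow
# toward the number of transacting user pairs, but each transaction lives
# in exactly one channel, so total stored data equals the single-tree case.
import random

random.seed(1)
users = [f"user{i}" for i in range(50)]
channels = {}
TX_COUNT = 1000

for _ in range(TX_COUNT):
    a, b = random.sample(users, 2)      # two distinct parties
    key = tuple(sorted((a, b)))         # direction-agnostic channel key
    channels.setdefault(key, []).append((a, b))

total_stored = sum(len(txs) for txs in channels.values())
assert total_stored == TX_COUNT         # no duplication across channels
print(f"{len(channels)} channels, {total_stored} transactions")
```

The number of *possible* channels is quadratic in users (C(50, 2) = 1225 here), but only pairs that actually transact ever get one, and the per-transaction storage cost is unchanged.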

Gonna be a big performance hit to sync all that.


It sounds neat and all, but I don't see how he is going to pull it off. It is like having a decentralized program supported by a decentralized network.

I am all for the program being centralized in a blockchain so that it is clean and clear.  The point is that somewhere we need decentralization in the mix that acts as a check against abuse. 

Gonna be a big performance hit to sync all that.
Why is that?

and more…
[QUOTE="Fuserleer, post: 22475, member: 4"]
Post-IPO And The Real V1.0
As mentioned previously, I'd still like to see a V1.0 client released with the full vision and set of features that were originally intended, along with some additional maneuvers should time prevail which I'll get to shortly.
Realistically (taking into consideration DAN TIME), a full V1.0 should be possible within 4-5 months from now.  The majority of the work is mainly the DEX, with the other 2 components requiring perhaps 1-2 months' work between them.  This puts a full launch in the same time frame as our major competitors (July-Aug); even in the event of releasing a lean V1.0, the timescales will sit pretty much the same, as the missing components would be completed around the same time.
Developing the transaction forest has taken a large amount of time, around 3.5 months, which is over my 2-3 month estimate back in December.  It's certainly been worth doing, as it solves many of the issues others will face and puts us at the frontier and beyond.
Additionally, this work allows us to have real mobile clients, with no 3rd-party service connectivity required, and it is light enough that these devices can also provide services and hatch transactions in future releases.  A barebones mobile client providing transactions and messaging is one of the additional cherries I would like to have ready to go with a full V1.0 client, but this depends on a strong IPO and being able to find a good, competent team to develop it.
The IPO will distract a little from development, but as it's a necessity for any "doomsday" scenario, we can't really do much about it; plus we would have to do one anyway before a full V1.0 launch.  Additionally, a strong IPO will allow us to investigate original promotional ideas to gain some traction in the mass market, allow the development of side projects such as the mobile client, and also let us look into merchant payment terminal integrations.
Finally, the great closed vs. open source debate.  As always stated since day 0, eMunie will remain closed source until such a time when cloning of eMunie by pump & dump developers will not hurt us; this hasn't changed and will not change.
------
So there you have it, that's the plan for 2015!  Hopefully you'll all find it pretty solid; it's been given a lot of thought both by myself and others, and is the best approach to mitigate any unexpected events and still get us to where we want to be.
As always, thanks for your support!
-Dan[/QUOTE]