Archived Posts from this Category
Seems that 8192 is a good maximum for region size. Both the viewer and the simulator agree.
To that end, I added Constants.MaximumRegionSize and have RegionInfo enforce it.
Having a maximum region size is also good for searching for neighbor regions as this limits the search area. This constant is thus used in the ‘find neighboring region’ logic as well as the ‘find region containing point’ logic.
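To make the search-bounding idea concrete, here is a minimal sketch (illustrative names, not the actual OpenSimulator code): a region can only contain a point if its base is within MaximumRegionSize meters of that point, which bounds the lookup area for the 'find region containing point' logic.

```csharp
using System;

static class Constants
{
    public const uint MaximumRegionSize = 8192; // agreed on by viewer and simulator
}

class RegionData
{
    public uint BaseX, BaseY;   // region corner in world meters
    public uint SizeX, SizeY;   // region extent in meters

    public bool ContainsPoint(double x, double y)
        => x >= BaseX && x < BaseX + SizeX
        && y >= BaseY && y < BaseY + SizeY;

    // Cheap pre-filter: a region whose base is farther than MaximumRegionSize
    // from the point cannot possibly contain it, so it can be skipped.
    public bool CouldContainPoint(double x, double y)
        => x - BaseX >= 0 && x - BaseX < Constants.MaximumRegionSize
        && y - BaseY >= 0 && y - BaseY < Constants.MaximumRegionSize;
}
```

Without a cap on region size, every region on the grid would be a candidate for any point; with the cap, only a fixed-size window of region bases needs to be examined.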
For the moment, this is in the varregion branch of the OpenSimulator source repository.
Most of the ‘move to new region’ code is based on checking boundaries. There is much code that computes whether an object or avatar has crossed a region boundary and then derives the address of the next region from that. Introducing variable-sized regions breaks much of this computation: the code doing the arithmetic usually assumes it can compute the location of the next region from a known, fixed region size. With varregions those assumptions no longer hold. The varregion implementation therefore moves the computation of region base locations and border locations to the GridService, which is the entity that really knows the size of all the regions and what is adjacent to what.
The realization that mapping a location to a region is really a GridService operation led me to totally rip apart the grid boundary checking code and replace it with two functions: Scene.PositionIsInCurrentRegion(Vector3 pos) and IGridService.GetRegionContainingWorldLocation(double X, double Y). The former tests whether the object/avatar has moved out of the current region and the latter finds the region it moved into.
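A rough sketch of that two-function split, with assumed shapes for the types involved (the Vector3 stub and field names here are illustrative, not the real OpenSimulator/OpenMetaverse definitions): the Scene only answers "is this position still inside me?", and the grid service alone resolves which region contains a world point.

```csharp
using System;

struct Vector3 { public float X, Y, Z; }

class Scene
{
    public uint RegionSizeX, RegionSizeY;  // meters; no longer assumed to be 256

    // Positions within a scene are region-relative, so the scene can answer
    // "has the object/avatar left me?" with purely local knowledge.
    public bool PositionIsInCurrentRegion(Vector3 pos)
        => pos.X >= 0 && pos.X < RegionSizeX
        && pos.Y >= 0 && pos.Y < RegionSizeY;
}

class RegionInfo { /* base location, size, endpoints, ... */ }

interface IGridService
{
    // Only the grid service knows every region's base and size, so only it
    // can map a world coordinate to a region without assuming a 256m grid.
    RegionInfo GetRegionContainingWorldLocation(double x, double y);
}
```

The simulator first calls the cheap local test; only when that fails does it pay for the grid service lookup to learn the destination region.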
A side note on the computation of positions. A region has a base X,Y location that can be measured either in meters or in ‘region units’ (the number of 256-meter regions from the origin). For instance, a region location of (1000,1000) is in region units; the corresponding ‘world coordinate’ of that region is (256000, 256000), its distance in meters from zero. These meter measurements are usually passed around as ‘int’s or ‘uint’s. An object/avatar within a region has a relative position, measured from the base of the region and usually stored as a ‘float’. So an object would be at (23.234, 44.768) within a region. An object’s world location, though, must be a ‘double’, since a C# float has only 6 or 7 significant digits. An object’s relative location (float) plus its region’s base (uint) combine into a world coordinate (double) that can be used to find the region containing that point.
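The arithmetic above can be shown in a few lines (helper names are mine, not the actual OpenSimulator API). The key point is the precision: near 256000 meters a C# float can only resolve steps of about 0.016m, so the combined world coordinate must be a double.

```csharp
using System;

static class Coord
{
    const uint MetersPerRegionUnit = 256;

    // region units -> world meters, e.g. (1000,1000) -> (256000,256000)
    public static uint RegionUnitsToMeters(uint regionUnits)
        => regionUnits * MetersPerRegionUnit;

    // relative position (float) + region base (uint) -> world coordinate (double).
    // A double is required: a float has ~7 significant digits, so it cannot
    // hold 256044.768 exactly and would drop the low digits.
    public static double ToWorldCoord(uint regionBaseMeters, float relative)
        => (double)regionBaseMeters + relative;
}
```

For example, an object at relative (23.234, 44.768) in the region based at (256000, 256000) has the world location (256023.234, 256044.768); rounding that Y value through a float loses a couple of millimeters already.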
One major problem is passing the terrain data from the region to the protocol stack. The existing implementation passed an array of floats that was presumed to be a 256×256 array of region terrain heights. The TerrainChannel class is an attempt to hide the terrain implementation from TerrainModule. TerrainChannel can’t be passed into the protocol stack (LLClientView) because TerrainChannel is defined as part of OpenSim.Region.Framework, which is not visible to the protocol code.
My solution is to create the TerrainData class in OpenSim.Framework. TerrainData just wraps the data structure for the terrain and additionally has the attributes giving X and Y size.
I didn’t want to change the signature of IClientAPI since so many external modules rely on it. Ideally it would be changed to pass TerrainData rather than a float array, but I decided not to change IClientAPI and instead have LLClientView ignore the passed array and reach back into the associated scene to fetch the TerrainData instance.
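The rough shape of the TerrainData idea, as described above, is a class in OpenSim.Framework that wraps the heightmap and carries its own dimensions so consumers stop assuming 256×256. The member names in this sketch are illustrative:

```csharp
using System;

public class TerrainData
{
    public int SizeX { get; private set; }
    public int SizeY { get; private set; }

    private readonly float[,] m_heights;  // the wrapped terrain storage

    public TerrainData(int sizeX, int sizeY)
    {
        SizeX = sizeX;
        SizeY = sizeY;
        m_heights = new float[sizeX, sizeY];
    }

    // Height access by region-relative cell coordinates.
    public float this[int x, int y]
    {
        get { return m_heights[x, y]; }
        set { m_heights[x, y] = value; }
    }
}
```

Because the class lives in OpenSim.Framework rather than OpenSim.Region.Framework, the protocol code (LLClientView) can see it and ask the scene for the instance directly.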
At the moment, all of these changes are in the varregion branch of the OpenSimulator repository.
This port will use Aurora’s protocol extensions, so the existing Firestorm and Singularity Aurora support will now work for OpenSimulator. The larger region size will be restricted to multiples of 256 meters, and adjacent regions (the ability to have other regions spatially next to larger regions) will not be implemented and will not work. Additionally, the larger regions must be square. This latter restriction exists because the viewers currently (20131101) use only the X dimension for both the X and Y size. These restrictions are enforced by code in RegionInfo.cs, which truncates and rounds values and outputs warning log messages.
The size is specified in the Region.ini file:
[MyRegionName]
RegionUUID = 95ec77ec-58c5-4ce2-9ff3-b6d1900d78a2
Location = 1000,1000
SizeX = 1024
SizeY = 1024
InternalAddress = 0.0.0.0
InternalPort = 9200
AllowAlternatePorts = False
ExternalHostName = SYSTEMIP
If size is not specified, it will, of course, default to the legacy size of 256.
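A hedged sketch of the enforcement described above for RegionInfo.cs (the helper name and exact rounding policy are my assumptions): truncate to a multiple of 256, clamp to the maximum, and use one value for both dimensions since regions must be square.

```csharp
using System;

static class RegionSizeCheck
{
    const uint LegacyRegionSize = 256;
    const uint MaximumRegionSize = 8192;

    // Returns the sanitized edge length; callers assign it to both SizeX and
    // SizeY because viewers currently use the X dimension for both.
    public static uint SanitizeSize(uint requested)
    {
        // Truncate down to a multiple of 256 meters, never below the legacy size.
        uint size = Math.Max(LegacyRegionSize,
                             (requested / LegacyRegionSize) * LegacyRegionSize);
        // Clamp to the maximum the viewer and simulator agree on.
        return Math.Min(size, MaximumRegionSize);
    }
}
```

So a requested 1000×1000 region would come out as 768×768 (with a warning logged in the real code), and anything above 8192 is clamped down to 8192.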
Since this will be a major change to OpenSimulator that touches a lot of different parts, subsequent posts will discuss the changes I’m making.
I’ve been working on LookingGlass which is a stand-alone, you-have-to-download-it viewer for the OpenSimulator virtual worlds. I really wish one didn’t have to download the viewer. Viewing should just happen as part of the web.
I’ve looked at this several times and it looks like the infrastructure is maturing. WebGL is appearing or will soon appear in most browsers. This provides the basis for getting accelerated, 3D graphics on the screen. Additionally, it is standard in the browsers (IE is a problem, Microsoft being who they are, but Google has a plugin to fix that).
The new O3D is early in its development but it is coming. 3D on the web is just around the corner.
I have some ideas for data coding (more on that in a later post) but it required me to do some heavyweight math. It has been a long time since I did math so I needed some refreshers. One amazing resource is the online MIT classes at MIT Open Course Ware. They have video lectures, class notes and exercises from freshman to graduate level.
I’m making my way through freshman calculus. Have to begin somewhere and I’ve forgotten a lot. It’s like a foreign language — if you don’t use it, you lose it.
The updates for LookingGlass have been happening over at the official web site.
Camera and avatar movement is MUCH smoother because of finding bugs and adding position and rotation animation code. That also means that movement of animated objects is now smooth. LG works great in OSGrid/Lbsa Plaza but larger sims like OSGrid/Wright Plaza stretch the bounds of memory. SkyX now works with OpenGL although the shaders aren’t all together for the sun and moon lighting. Avatars are still stuck in T-pose and attachments are at the feet. Sitting avatars now are near the sit location and not at the sim’s 0,0. Also upgraded to the latest libomv. A new Windows 32-bit installation package is in the download section.
I broke a lot of stuff getting to a better place. The loading and reloading experience was poor (slow and jerky) so I moved a bunch of the update code from C# to C++. That necessitated (after a while spent finding that this was the problem) building and including Boost (http://boost.org) for data structure access locking. I also wanted to have the Radegast window navigable via the keyboard. This meant reorganizing the input code to extract common routines. Adding avatars meant rethinking and redoing the organization of Entities to allow for extensions for attachments, etc. Avatars are not first class items but merely a specialization of an entity which the viewer has to figure out how to display. The short of all this is I ripped a lot of things apart and they are not all together yet. Hopefully only another week. Fingers crossed.
To make for open discussions and transparency in my design directions, I will be adding design comments to my blog here as well as in the LookingGlass forum on the Forge. Join in the discussions on the forge.
I had been fretting about whether avatars and their attachments are first class objects or subclasses of entities. I am now thinking the latter. That is, World just deals with Regions that contain Entities and the Entities have sub-classes that are created by Comm and sensed by the Viewer.
The pattern I’m working with at the moment is (using an Avatar as an example) for there to be a World.IEntityAvatar which defines the operations beyond the base entity. For LLLP, there is a World.LL.LLEntityAvatar that implements the IEntityAvatar and IEntity for an LLLP avatar. Additionally, LLEntityAvatar does a
to add the interface to the IEntity. Later, the Viewer will do a
to see if the underlying entity is an avatar or not. If it is, it will call the avatar specific methods to view the avatar.
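The post elides the actual calls, so the following is only an assumed sketch of the register-then-query pattern it describes: the LLLP implementation registers its IEntityAvatar interface on the entity, and the Viewer later asks whether the entity carries it. The dictionary-based implementation and method names are mine.

```csharp
using System;
using System.Collections.Generic;

interface IEntity
{
    void RegisterInterface<T>(T impl) where T : class;
    bool TryGet<T>(out T impl) where T : class;
}

interface IEntityAvatar { /* avatar-specific operations beyond the base entity */ }

class EntityBase : IEntity
{
    // Interfaces attached to this entity, keyed by interface type.
    private readonly Dictionary<Type, object> m_interfaces = new Dictionary<Type, object>();

    public void RegisterInterface<T>(T impl) where T : class
    {
        m_interfaces[typeof(T)] = impl;
    }

    public bool TryGet<T>(out T impl) where T : class
    {
        object found;
        impl = m_interfaces.TryGetValue(typeof(T), out found) ? (T)found : null;
        return impl != null;
    }
}

// LLEntityAvatar would do something like: RegisterInterface<IEntityAvatar>(this);
// and the Viewer would do: if (entity.TryGet<IEntityAvatar>(out av)) { /* render avatar */ }
```

This keeps avatars as plain entities while letting the viewer discover and use avatar-specific behavior only when it is present.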
Over time, methods will be added to IEntityAvatar and LLEntityAvatar to try and hide the implementation details of an avatar and create an abstract interface for a boned mesh with animations. I figure that the specific positioning logic that is now in RendererOgreLL will move into LLEntityPhysical with a generalized interface for RendererOgreLL to call into the entity to compute the location and parent.
The goal would be to extend Viewer to handle IEntityPhysical (a regular object in the view), IEntityFoliage, IEntityAvatar and IEntityAttachment; all the protocol and item type specific logic will move into protocol specific implementations of classes with those interfaces.
I am about to make a video of logging into Wright Plaza and flying around but I wanted to get shadows working first. I introduced an oddity whereby the brightness of a prim is tied to its x or y rotation. I just can’t find the problem but, since it makes most buildings have large black sections, I have to find the bug before I show LG off. Other than that, LG allows logging into and navigating most sims.
I’ve wanted to advance a version but the main thing keeping LookingGlass from being usable was the unbelievable memory footprint. Ogre vertices took up all the memory until it crashed. So, I’ve spent the last two weeks adding a dynamic loading and unloading system. Now I can log into Wright Plaza and walk and fly around. Woot!! I’ll add a video and some updates this weekend.
I’ve completed sculpties and added many texture processing improvements. Normal LLLP textures are JPEG2000 but I had to figure out how to store sculptie textures as PNGs so I could read them into the .NET/Mono graphics functions. Once I was more familiar with textures, I put in checks for transparency and put in the code to pass that info down into the renderer. There are some problems around PrimMesher but I don’t know if it’s a bug or me using it wrong. You’ll see the problems with box prims having the wrong textures on the bottom (most noticeable if someone rotated the box to use the hollow for a hole) and that textures are placed on sculpties upside down.
But it’s getting more stable and can pretty much do a sim. Walking around is not very easy but that will take prioritization of the work queues.
I updated my page on dynamically loading meshes, materials and textures in Ogre with information for textures. Turns out that the requests for the textures work out but that, again, the containing mesh has to be reloaded to get the texture to pop up in the scene. The code for finding which meshes need reloading is included.
I reduced the detail of Meshmerizer from High to Low. The effect is not visible (at least so far) but the number of vertices generated per mesh is much less. After setting ‘MultipleSims’ to one, I was able to login and render the whole of OSGrid/Wright Plaza. Woot!!
Performance is nowhere near good enough yet. The callback for materials is taking forever since there are > 30k materials in Wright Plaza. Ogre also freezes up now and then. I added/completed statistics gathering and RESTing to try to find it. At the moment, it looks like Ogre hangs up doing the JPEG2000 decompression for the mipmaps. Might have to convert the textures on reception.
I spent a few days trying to get Ogre scaling of prims working. Decided to bag it for the moment. I thought that letting Ogre scale the prims would make most of the prims just unit boxes whose vertices I could share, thus reducing the memory problem. Well, it seems that entity scaling also affects the coordinates of the entity, so linked sets (prims positioned relative to their parent) would be messed up. Even after fixing that I discovered that the scaling messed up the texture mapping (which I am using Ogre for as well), so I would have to scale the texture depending on the scaling of the face the texture was on. Ick.
Anyway, backed out the scaling code, found some bugs (one in the work queue that would ironically make more work) and the arrow keys now move the agent avatar although the camera doesn’t move yet.
This weekend’s time spent programming (as opposed to pulling weeds and cleaning the garage) went to changing the scaling of prims to use the Ogre scaling factors rather than having Meshmerizer scale the vertex data. This will allow me to share vertex data within cube prims.
Ogre doesn’t allow one to share vertices between prims (well, not really true, and I will look into creating my own vertices class, but that’s a ways down the road), but it does have a feature to share the same vertices between all of the sub-meshes within a mesh. That means I can share the vertices between all the sides of the ubiquitous cube. This should divide the memory requirements by nearly 6. Fingers crossed.
I’ve been tweaking the LookingGlass web site at http://lookingglassviewer.org/. Just another thing to add to the project: learning MediaWiki. But the theme is new so it doesn’t look like the default MediaWiki page. More formatting is needed but that will come with time.
I will be updating my progress programming here and on that site. Manual cut-and-paste at the moment. That is another thing for automation.
I am working on an OpenSim/SecondLife compatible viewer. I almost have it to a state that I am going to let the BSD Licensed code loose. I want to get it to the point of being buildable and runnable and not totally sucky before I put the code out. That’s not to say it will be anywhere usable or feature complete at that point.
The biggest problem at the moment is performance. I have clearly made some of the noob mistakes when creating a visual application. It desperately needs a manager and scheduler for all of the work queues. The current implementation gets totally overwhelmed when entering a large sim (especially an OpenSim sim, since the whole sim’s contents are thrown at the viewer).
Here are some progress pictures. These are of the Portland Connection sim in SecondLife(r).
I listen to the WNYC Radio Lab podcast and last week’s was about the musician Juana Molina. She creates music by using a looping machine to add to her presentation: she starts singing and plays her guitar, loops portions of it and has it play back while she sings over it. Layer upon layer is added all in real time until a complex chorus of voices and sounds creates a song.
Back in August, they podcasted about Zoe Keating who does the same thing with a cello: in a real time performance, she plays phrases and layers them into an accompaniment and creates an overall complex and full sound.
This all made me think about how the Internet and new, personal technologies are expanding what an individual can do. I remember talk about how, since tech is getting so complex, only corporations could innovate: the garage invention is dead. But now we have people blogging and being magnified without the need of a newspaper, we have musicians who can make an orchestra without the symphony and we have directors creating movies without the film studio.
Real innovation still happens at the “bottom” — one person with an idea. And that one person, with all the new technologies that are available to everyone, can still make a splash. There are two messages there: individual people free to create, and technology in the hands of everyone.