Everything posted by MattScott

  1. Yep. Missed it. We'll get that pulled down too. Anyone who previously purchased it can keep them. We aren't removing them from inventory - just not selling them any more.
  2. For the OP: our predecessors made the Trump mask, and after some analysis I made the call to remove it a while back. This was not done for political reasons. Like many other mafia-style games, APB is set in an alternate reality where San Paro represents a hybrid of cities in the US. While the world of APB does have politicians, none of them are real figures like Trump. Thanks, Matt
  3. Hi everyone, I knew aiming for the Open Beta this weekend was aggressive, and unfortunately we hit a minor setback yesterday. As we were testing the new build, we found that new players joining the Open Beta might crash if they have any of the new Joker Store items. This is because we haven't migrated content from 1.20 into 2.1 since Open Beta #2. That process has now started, but it means we won't be able to run the test this weekend. I'll keep you posted. Thanks, Matt
  4. Hi everyone, Small Monday update: We haven't done extensive testing, but here are the early results comparing a build with AVX support to a build without AVX. This was done on a fairly high-end PC. The first test was Financial: The second test was Waterfront: NOTE: There is a margin of variance when running these kinds of tests, based on pedestrians and cars that go by, which means the results are essentially the same. This is just one of the cases where APB does the opposite of what you would expect. In light of this, we are dropping the AVX requirement moving forward. Thanks, Matt
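Since the AVX decision above is ultimately a hardware-capability question, here is a minimal sketch of how a launcher might detect AVX support before choosing which build to run. This is purely illustrative (not Little Orbit's code), Linux-specific, and the function name is invented for the example.

```python
# Hypothetical sketch: check whether the host CPU advertises AVX support,
# the way a launcher might gate an AVX-only build.
# Linux-only: parses the "flags" line of /proc/cpuinfo.
def cpu_supports_avx(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line is a space-separated list of feature names.
                    return "avx" in line.split()
    except OSError:
        pass
    # If we can't read the file, assume no AVX rather than crash later.
    return False
```

In practice a shipping game would query CPUID directly (or rely on the OS), but the fallback-to-False behavior mirrors the conservative choice described in the post: when in doubt, ship the build everyone can run.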
  5. Hi everyone, I'll start with the good news: We're ready for Open Beta #3, and this time we will have all districts online for testing. PROGRESS WITH THE ENGINE UPDATE: The team is pushing hard to do the Open Beta this next weekend so we don't interfere with the Halloween event. There is only 1 blocker left, and it's a last-minute item I wanted to try for the community: Removing AVX. During development of APB 2.1, we decided to require support for AVX. After the initial Open Beta, we learned that this decision prevented a significant number of players from joining. So as an experiment, I am having the team remove AVX from the client to make sure everyone can still play once we migrate over. It's about 3 days of work to recompile all of the supporting libraries without AVX, check them all in, and then recompile a new 2.1 build without AVX. If there is little to no difference, then we'll test without it. But if there is a big enough impact, then we'll have to move forward with AVX still required. Assuming that work gets done by EOD Monday, I would feel comfortable that we have enough QA time to identify and fix any last-minute problems. For this test, we are also leaving a series of Advanced multithreading options visible for you guys to play with. None of these options require restarting the game or even leaving the district. They dynamically enable and disable multithreading features. I would love to collect some feedback on which options worked best overall for your configuration (CPU, GPU, memory, hard drive) so we can set those on by default once we roll out. In terms of bugs, the team has mostly cleared the things we knew about: Vivox should now work. Customizations shouldn't look wonky (after making some larger changes, we couldn't reproduce the skinny character issues). Doors shouldn't get out of sync over the network. I am hoping we fixed the character baking hitch that was occurring when new players joined the district. 
This is a very difficult thing to test in a lab, so the engineers will be capturing performance stats during the test. MOVING FORWARD WITH THE TEST: I want to finish this update by talking a little about expectations. This has been an incredibly difficult project to finish. Obviously we are massively off the timeline I originally set. There are still bugs, and we are still not done optimizing the renderer, but weighing the pros and cons, I have decided we are hopefully "good enough" to launch. There are simply too many other more important tasks that are blocked by not releasing APB 2.1. I want to get into better district management, better matchmaking, and cross-district play between worlds. After the 2.1 launch, the team is committed to continuing to optimize performance. There are definitely places where the 2.1 FPS significantly underperforms Live. We have some larger changes that we would like to make to the renderer, but they will require much more time to engineer. However, we have asked the internal testing group how the game "feels". And those who have given feedback and been involved in daily testing say that 2.1 "feels" better than Live. So this is where I need the community's help. For this test, let's put away the FPS counter. There will be plenty of time for that down the road. Here is what I would like to see from testers for Open Beta #3: First, ahead of the test, get on Live. Try a mission or Fight Club. Try raising or lowering the graphics settings on Live to zero in on the visual elements that make or break the game for you. Focus on the feel of Live. The hitches. The mouse input latency. Etc. Next, get into the Open Beta and try to do the same things in the same districts. Play around with the graphics settings to see if you can get to a similar visual quality. Then try to compare the feel of Live versus the feel of 2.1 and give us some feedback. 
If we can pull off next weekend for Open Beta #3, then we'll get an announcement up in a couple days. WHAT IS LEFT IF THIS OPEN BETA GOES WELL: During the test you'll see that some things are not up to date with Live. So the only major task that we need to complete before launching is to migrate the most recent changes from Live into APB 2.1. (That, in addition to any big blocker bugs we encounter during the test.) Thanks, Matt
  6. Hi everyone, This week's update is frustrating, because we are so close to new benchmarks. Unfortunately 2 specific areas were a bit more complicated and couldn't be completed on Friday. I may post mid next week once I get a chance to see where we are at. Thanks, Matt
  7. Hi everyone, I'm long overdue for a more comprehensive update on the Engine Upgrade, so here is a recap of the last couple of weeks. Back on September 6th, I outlined a list of things we were working on. Here is where we stand with those items. Progress with the Engine Upgrade: There is still a lot of thread safety work left to do, but this was a big source of the errors. This work has been completed. We *believe* we have re-architected the multithreaded code so that all the previous errors have been eliminated. There is still one issue related to Occlusion Query multithreading on RTX cards that causes a bizarre slowdown, so we have that code disabled till we can fix it. We want to allow secondary render threads to use the cache and then track down the specific case where we are setting the per material/mesh state and making sure those invalidate the cache properly so everything gets updated. This work has also been completed. I'm pretty excited about how we fixed this, because after we started testing the new code, we found another issue where the game was destroying the shader cache multiple times per frame. That has also been addressed. We are investigating an issue where specific items are forced to update every frame when they shouldn't. This last item is fixed. We started test driving the new code this week, and while there is some improvement, we're still not seeing anywhere near the kind of performance gains we were expecting. SIDE NOTE: The team has affectionately adopted the phrase "As expected, APB code does the unexpected". For the latter half of this week, the team went back and refreshed some older code that integrates with 3rd party tools that allow us to better analyze multithreaded performance. In the past, we have been relying on Visual Studio and 3rd party tools that were working when we took over. These existing tools measure where the most CPU time is being spent. 
Overall, that technique has been invaluable in finding brute force code doing large tasks in a single loop that could benefit from being multithreaded. However, these tools do not measure things on the GPU, the time spent waiting on a lock, or when a process is blocked. On Friday, we finished integrating some new tools that enabled us to stage a full test, capture results, and generate more comprehensive reports. As suspected, we found a significantly larger problem related to how the engine renders Meshes overall. To give an example, our previous efforts to multithread code lower down in the pipeline only account for a fraction of the processing time. The higher level system for actually rendering everything that was processed accounts for the majority of processing time - but it's mostly spent waiting or being blocked, which is why these areas never showed up in the previous analysis. The overall rendering is effectively drawing elements on only 1 or 2 threads. It's incredibly inefficient, and we already have a good idea on how to fix this. The team will be moving on to this new issue next week. Alongside these 3 areas listed above, the rest of the team has been fixing VOIP, desync, and other more minor areas. We need those out of the way so we can launch once the renderer is finished. As always, I'm providing these updates in real time with the best information I have at hand. I am (again) hopeful we'll see good progress this coming week, and that I can finally get some better mission district benchmarks. The plan is to spin up Open Beta #3 with mission districts as fast as we can after that. NOTE: After I posted this, I thought I would provide some nerdy details. Rendering in a 3D engine mostly happens in 3 stages. Stage 1 processes the scene. This involves lots of complex routines that gather up and process all the things we need to draw. Some of the things we do in this stage are: Identify only the things that are in view of the player's camera. 
This happens in multiple ways: bounding checks on the field of view, eliminating things too far in the distance, testing to see if an object is occluded (covered) by another object, etc. We want to eliminate everything not in view to avoid drawing far too many things. Recalculate lighting receivers on each object that we are going to render. Process objects for extra data that we need to pass into the next 2 stages. Stage 2 actually draws the big list of things to the scene, which also involves its own set of stages to properly sort objects back to front and render transparency correctly. Stage 3 handles post processing of the final frame buffer (effectively a composited image of the scene once the objects are drawn). This handles a variety of effects including shadow rendering, bloom, iris response, motion blur and more. Stage 3 requires a lot of extra information that was captured in Stages 1 and 2. TL;DR: The majority of our multithreaded work has been in Stage 1. Now we've identified big areas in Stage 2 that are affecting the overall speed of rendering a single frame. Thanks, Matt
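The Stage 1 visibility pass described above can be sketched in miniature. The example below is invented for illustration (it is not APB's actual code) and shows only the simplest elimination step, culling by distance from the camera, with made-up object and camera structures.

```python
# Illustrative sketch of one Stage 1 visibility check: drop anything
# farther from the camera than the view distance, so later stages never
# have to draw it. Object/camera shapes here are invented for the example.
import math

def cull_by_distance(objects, camera_pos, max_distance):
    """Return only the objects within max_distance of the camera."""
    visible = []
    for obj in objects:
        dx = obj["pos"][0] - camera_pos[0]
        dy = obj["pos"][1] - camera_pos[1]
        dz = obj["pos"][2] - camera_pos[2]
        if math.sqrt(dx * dx + dy * dy + dz * dz) <= max_distance:
            visible.append(obj)
    return visible
```

A real engine layers several such tests (frustum bounds, occlusion queries, and so on, as the post describes), each one shrinking the list handed to Stage 2.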
  8. Hey everyone, My schedule has been incredibly busy - busy enough that I completely missed last week's update and only realized it today. Sorry about that. Part of the problem is that in addition to daytime meetings here in Europe, I am also leading a massive project in the evenings with the US that keeps me up till 2-3am every night. Big deadline today, so that should free up some of my time to resume these updates tomorrow. Apologies, Matt
  9. Hi all, Time zones are messing with my updates. I got a brief update on where the Engine Upgrade was at Friday morning Pacific time. Progress continues to be good. It's a rat's nest of old code, but we've nearly unraveled it all and put most of it back together better. Internally the big test will be this upcoming week, when we can take everything for a test drive. At this point I am hopeful we will have some benchmarks by next week's update. Thanks, Matt
  10. Hi all, I have reviewed this thread and responded here: Thanks, Matt
  11. Hi everyone, Sorry I missed Friday. I'm currently traveling in Europe, and my schedule is a bit off. This week I'm going to talk about progress first, and then I'm going to talk about the overall state of the player base and what the plan is for rebuilding the player base. Progress with the Engine Upgrade: We had another solid week of working on complex code buried deep in the render system. A significant portion of rendering involves caching material shader parameters, and we were able to clean that code up so it is now thread safe. This greatly reduced the number of locks. Still to do: There is still a lot of thread safety work left to do, but this was a big source of the errors. Still to do: We are investigating an issue where specific items are forced to update every frame when they shouldn't. Still to do: We want to allow secondary render threads to use the cache and then track down the specific case where we are setting the per material/mesh state and making sure those invalidate the cache properly so everything gets updated. At that point, we'll be testing the new changes against our old benchmark to see if we can run the next phase of Open Beta with mission districts. State of the player base: I've seen a couple threads mentioning the decline in players, and I want to take a moment to comment on that. I think there are macro issues related to games in general right now, and there are certainly micro issues related to APB specifically. If you look at the industry as a whole, there are declining CCUs in most games right now. This is because in many places coronavirus restrictions have been lifted, so players who are sick of being cooped up all summer are doing something different. On top of that, students are going back to school, which takes away free time for games. I'm not saying every game out there has dropped players, but in general we are seeing that trend right now. For APB, we find ourselves in a weird transitional place. 
APB 1.20 (Live) has some systemic issues that can create a pretty bad play experience - mostly centering on matchmaking, new player experience, and various forms of griefing/dethreating. We have a lot of things in the works for 2.1 to address these issues, and they have either been completed or are nearly done. But work on the engine has taken light years longer than I anticipated. While we have done 2 Open Betas now, I can understand the frustration in waiting for the rest of the districts and the actual launch. Hopefully everyone has seen our hard work on Asylum and Social, and they know we are committed to getting this finished as soon as possible. I believe we have made good use of this delay by overhauling the game's entire monetization system. You'll be seeing more of this in September. And I think once APB 2.1 does launch, new players will find the game much more attractive because of our changes. Rebuilding the player base: With the current state of the game in mind, I have been faced with the pretty tough decision of whether to start spending lots of money now on advertising and promoting 1.20, or continuing to wait for the launch of 2.1. That money has been set aside, but timing is everything. I believe we will only get one shot to bring back players and show them the new APB experience. For that reason, I have intentionally held off promoting 1.20, because that would largely be a short term gain and a long term loss as new players enter 1.20 and then bounce for the same reasons we are upgrading everything. Waiting does have its consequences. Just know that advertising and promoting this game is not a question of IF. It's a question of WHEN. APB 2.1 has a lot of features under the hood that will either address or start to fix the remaining core issues in the game. It has much improved console experiences, which at one point during their launch added significant amounts of players, who then left when the performance was so bad. 
With server cross play, we can bring the entire community together in a much better way. It also has integration with Anzu for in-game ads that will take more pressure off players to spend money. Anzu has also committed to helping us with a large Twitch campaign for awareness at launch. Hang in there. I want to end this week's update with a note to those of you who have continued to support us both in words and with purchases. Thank you. We wouldn't be able to do this without you. Thanks, Matt
  12. Hi everyone, I'm going to sneak this update in under-the-wire for today. I want to start by acknowledging that this part of the Engine work is frustrating. It's noodley and tedious, because we have to run the game over and over under certain conditions to find bottlenecks or crashes. I know sometimes these updates seem endless. I'm sure it appears that I talk about the same stuff or the same goals (like speed optimization) all the time. However, this is necessary to get Action Districts up to speed. I feel very good about this week's progress. We have started work to address the stalls. Over the last two updates, I talked about how much we are throwing at the Graphics card (GPU) and the problems that causes. This week we hand inspected logs for all the calls per frame, and found that a large percentage of them are executing the same command over and over. Each of these instructions is wasteful and costly if we don't need it. So we have started work on a State Caching system that saves values every time we pass them to the GPU, across a wide number of different types of variables like Shaders, Samplers (textures), Parameters, and Buffers. Then when we go to run that command again, we check the new value against the cached one, and if it's the same, we can skip that instruction. I am honestly shocked there wasn't a system like this in Unreal already (I am 99% sure that Unreal 4 has a system like this). Some early work was done on this system, but based on my math, we should see a significant speed improvement. We also finally figured out the source of the D3D crashes. This was super hard to reproduce, but we finally found that deep in the multi-threaded rendering code it was saving copies of how Meshes attach to Shaders. But other threads can actually overwrite the Shader, which then means drawing the Mesh is now invalid (wrong Shader). This sort of code is not "thread safe". 
We did a temporary fix for the issue, but it involves locking all the threads when a mesh is drawn. That's incredibly bad, because it slows things down. But we now have a benchmark build that doesn't crash. Starting on Monday, we will be re-working all those bad places where values are stored, so we can remove the lock and make drawing Meshes thread safe. Since we have a benchmark, we will see the speed improvements as we go. We added optimizations for 2 types of Shadow rendering. We are focusing on the Central Park area in Financial, which is one of the slowest areas in the game. There are a number of issues that make this area bad, and they are related to how we handle shadows from foliage. These two optimizations went in fairly smoothly, and they are the "pattern" for what we want to do throughout other parts of the code. Early tests indicated a 10% FPS improvement. We have other areas that are very similar, which we can now easily apply this new pattern to in order to gain more speed. Thanks, Matt
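The State Caching idea described above can be sketched as follows. This is a hedged illustration, not the actual engine code: `submit_to_gpu` is a stand-in for whatever issues the real GPU command, and the state "slots" (shader, sampler, and so on) are simplified to dictionary keys.

```python
# Illustrative sketch of a State Caching system: remember the last value
# sent to the GPU for each state slot, and skip the command entirely when
# the new value is identical to what the GPU already has.
class StateCache:
    def __init__(self, submit_to_gpu):
        self._cache = {}          # slot name -> last submitted value
        self._submit = submit_to_gpu
        self.skipped = 0          # count of redundant commands avoided

    def set_state(self, slot, value):
        if self._cache.get(slot) == value:
            self.skipped += 1     # same value already on the GPU: skip
            return
        self._cache[slot] = value
        self._submit(slot, value)
```

The win comes from the post's observation that many per-frame calls repeat the same command: each skipped submission is driver overhead that never happens. A real implementation also has to invalidate slots when something else (a device reset, another thread) changes GPU state behind the cache's back, which is exactly the thread-safety wrinkle the post goes on to describe.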
  13. Hi everyone, I'm going to be moving my Engine Updates to Friday, now that I don't post CS stats. That means I don't have to interrupt my weekend to get the updates out. We got back in on Monday and spent the first part of the week digging through nVidia driver documentation and then trying various code paths/options for getting a better sense of how much we can throw at the card before it stalls. It appears the problem is a bit more nuanced than we thought, and may take a bit longer to address. So we're splitting efforts. Part of the team will move onto the other optimizations that we had planned, while another part of the team continues to look at this issue. As of this afternoon, I don't have a status on the stalls. But yesterday, I got into Asylum and Financial to run my own tests, and much to my dismay I found that the game ran slightly slower than before. Classic APB, I thought. After sharing logs and performance data with the team, I was asked what settings I had run the test at... and of course I had forgotten and left everything at Very High, expecting the benchmark to line up with our previous Very High settings. I shared last week that we fixed a number of bad issues related to Settings, and that Very High is now not comparable to Maximum. So I went back and tried a test at High, and then at Medium. For the most part, Medium looks as good as Maximum on Live - with one exception related to how far away dynamic lights render. So in theory, the proper benchmark is between High and Medium with some custom values. With that new setup, I was relieved to find that performance had improved for both Asylum and Financial. Once this goes out, I will reiterate - don't jump to Very High and expect it to match Maximum on Live. You guys will need to play with the settings and find what works best for your systems. NOTE: I might add to this later tonight as the devs submit end of week work. Thanks, Matt
  14. Hi everyone, I completely missed an update last weekend. Sorry about that. My schedule is a bit out of control lately. I have been putting off travel all summer, but now I need to head over to Europe at the end of the month (if I am allowed), and it's been a scramble. That also means I won't be available this weekend either. So today will have to serve as both a late and an early update. After Open Beta #2, we looked at a number of big ticket issues. We sorted out the network latency. So that should be better in the next test. We also looked at the crashes and found a fair amount of unfinished code relating to Settings. That whole section has been cleaned up to prevent crashes and random values from being used in some cases. We also took the opportunity to clean up rendering distances and connect them to gameplay instead of settings, so players should always be on equal footing regardless of graphic quality. Due to things moving around, the 'High' graphical setting is now the equivalent of the 'Maximum' setting on Live. There are a number of bugs that we are setting aside, so we can focus on more multithreaded work that didn't make the last public build. That work is specific to getting action districts into the Beta. We have completed 3 different pieces of that project related to Occlusion, which is a way to speed up rendering and provide better performance on larger maps (think Asylum versus Waterfront). NOTE: Unfortunately we lost at least a day of work after Microsoft released a bad Visual Studio 2017 update. We wrote some temporary workaround code, but our hope is that Microsoft patches again soon, because the workarounds are slightly slower. Initial tests looked very good, but disappointingly other machines ground to a halt with 100% CPU usage across all cores, locking up the machine for seconds at a time. As of just this afternoon, we think we now understand what is happening. 
It looks like each GPU has different speeds and different limits to how much memory you're allowed to push in some Occlusion calls per frame. If you go over this limit, the driver will basically stall until it's done processing the previous buffer changes to free up more memory. We think that we're now pushing so many occlusion queries (each requiring specific buffers) to the card early in each frame that the card runs out of buffer space. Yep. You guessed it. It's too fast. Seriously, I couldn't make this up if I wanted to. Before, the multi-threaded code was slow enough that we likely didn't hit this limit, although I know some players complained of bad stutters, which would have been this issue. We need to fix this issue and then we'll get a build up for SPCT to brea- I mean test. Then we'll do another public test. Thanks, Matt
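A common mitigation for a per-frame driver limit like the one described above is to batch the work so no single frame exceeds the cap. The sketch below is purely illustrative: the limit value is invented, and this is not the team's actual fix, just the general shape of the technique.

```python
# Illustrative sketch: split a large list of occlusion queries into
# per-frame batches so the driver's buffer limit is never exceeded.
# MAX_QUERIES_PER_FRAME is a made-up number for the example; a real
# engine would derive it per-GPU from driver behavior.
MAX_QUERIES_PER_FRAME = 256

def batch_queries(queries, limit=MAX_QUERIES_PER_FRAME):
    """Return the queries grouped into batches of at most `limit`."""
    return [queries[i:i + limit] for i in range(0, len(queries), limit)]
```

The trade-off is latency: queries pushed to a later frame return their results a frame late, which is usually acceptable for occlusion culling since visibility rarely changes in a single frame.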
  15. I could answer with "anything is possible", but not all things are feasible. It's much easier to resurface environment materials. A good artist can use Substance or any number of tools to generate unique but higher-fidelity versions of concrete, dirt, or brick. However, it's a much more difficult task to entirely recreate the textures for characters, clothing, and critters. Right now, we've taken the direction to pretty much up-rez, convert to PBR, but leave the original textures intact. You're right that a lot of them are hand drawn in a very crude sort of way. I think it's better for now to have a solid base across everything up and running before we jump in and start creating replacement textures. I found some of this floating around in the code. It looks intriguing, but I want to stick with what was there at the end before trying anything "new" (or going back to something that was discarded).
  16. Hi everyone, I want to post a brief update this week after yesterday's test. To echo what @Sakebee has already said, I want to thank everyone who came out for the Open Beta #2 yesterday. We had ~500 players on at the same time. We had a total of just over 2,600 players who logged in to test. Based on all of the hiccups in the first attempt, I view this as our first real opportunity to get widespread data and feedback from players. I want to acknowledge that while some players had okay performance, many players had frustrating experiences. Here is a quick recap of what we will be looking at from this test. All players saw stutters when new characters would load into the district. This is something we have been working on, and we're close to solving. All players saw their frame rate drop from more items in the UI. This has been an ongoing effort to optimize, but the UI is much worse with lots of players on. All players saw the status of doors desync with the client, so they appeared closed when actually open. Most players saw pretty bad server lag and some saw warping. This is a bit of a mystery right now. The servers are the same ones in the same locations we use in Live. Most players saw higher input lag. This is a critical item for us to look at more closely. Most players experienced at least 1 crash, which was expected. We have some work on this that didn't make the build. Some players saw issues with their imported characters that made them skinny or changed their face proportions. This is purely a bug and will be addressed. The font is still being swapped out. While you can still see the newer console font in some places, we have already swapped many other places back to the older APB PC font. A number of art issues were reported, which is great. Please be aware we won't be working on those till we fix the big ticket performance items. 
On the bright side, I had numerous players report a smooth experience with only the stutters related to character loading, even if their FPS was slower than Live. Overall, my biggest concern right now is that it looks like the build for Open Beta #2 ran about 15-20% slower than the Open Beta #1 build, and in some cases much slower. I want to make sure we didn't add some bad code at the last minute. It also looks like we had bad "hot spots" that caused large frame drops if you looked in that direction during game play. We're still collecting some more detailed information to narrow down the issue. I got some negative feedback that 3 hours for the test was too short. I will take that into consideration for the next test. We don't want to exclude anyone, but we also didn't know if the Login server fix would work, so we needed a clear endpoint in case things weren't working well. Moving forward, the team will finish off the work that didn't make it into this build. I'm still going to be selective in when we turn on and off the Beta servers, because I don't want to build up testing fatigue like we have seen in the past with Weapon Balance Districts. For each test, we need as many players as possible on at the same time. Lastly, I know not everyone is a fan of Fight Club, but we need to stick with Social and Asylum until those districts run better. Financial and Waterfront are *mostly* done from an art perspective. We're still adjusting lighting, but we know those districts perform worse due to their size. There is no value in putting those online yet. Hopefully, we have finally put to rest the idea that the Engine Upgrade was a myth. I purposefully didn't limit streaming or videos of the event, so those who didn't make it for the test could see the good, the bad, and the ugly. I will continue with that policy. At the end of the day, this is a community effort. We need your continued help, so I look forward to seeing the new engine improve over time. Thanks, Matt
  17. Hi all, Today's CS update will be a bit different. First, I want to announce that this will be the last CS update. I started this thread back on June 1st 2018 as a way to provide transparency to the players on CS wait times. Since then I have posted weekly, whether the numbers were good or bad. All along, I have known we were terribly far behind, so my personal goal has always been to get our response times down to a reasonable level. During the last 18 months, we've seen the problem skyrocket, then drop, then go back up and start to come down again. In my opinion, two problems have contributed to this: The APB support tools are very complex and require lots of training before a CS agent can efficiently deal with tickets. The game had a bad history of copy/paste responses that sometimes weren't related to the ticket, so I set the lofty (and somewhat naive) goal of not using copy/paste responses. I simply underestimated the head count needed in CS. Once we got behind, we couldn't hire fast enough to dig our way out. To be clear, Customer Support is nearly always a thankless job. If we do well, the player moves on. If we don't solve their problem, then my staff take horrible amounts of abuse. Trust me, the CS staff are regular people. They are here because they like games, they want to keep APB fun, and they want to help. We will never be perfect, but we definitely put in lots of time to try and get things right. Second, for the first time since we took over, we are at 0 new tickets. Obviously that will change in 1 minute when a new ticket comes through. But right now as I write this, every ticket in our system has been responded to, and most are waiting for players to provide more feedback. Right now we have a total of 137 tickets, and the oldest is from June 17th. I appreciate everyone's patience, and I'm sorry it took so long. Hopefully we'll be able to keep up from here on out. Thanks, Matt
  18. Hi everyone, Lots of work this last week. We'll be making an announcement soon. Stay tuned. Thanks, Matt EDIT: The 2nd Open Beta has been announced for August 1st. https://www.gamersfirst.com/apb/news/2020/7/27/second-open-beta
  19. Hi all, I am very pleased with the CS team. Our 3 new agents, alongside everyone else, have dug in and finally gotten our tickets under control. We still have a ways to go, but I'm feeling a lot better. As of today we have: 85 new tickets (a massive drop from last week) 245 total tickets (another big drop from last week) We are responding to tickets submitted on 7/14, which brings the wait time down to 10 days. Thanks, Matt
  20. Hi everyone, It's been a while since my last update, and several members of the Fallen Earth community have asked for more frequent posts. The team at Little Orbit is very busy with several high priority tasks right now, and my schedule is the worst out of everyone's. Unfortunately that means I can't commit to monthly updates right now, but I will do my best to post more often. I do need to reiterate that out of the 3 projects going on in the studio, Fallen Earth is currently the lowest priority. I expect that to change as we get closer to the end of September or early October. I know that's not a fun thing to hear, but I want to set expectations correctly. Work on the project is continuing on a number of different fronts. In May, I laid out the short term road map of areas we are working on, which includes finalizing terrain, getting characters up and running, and implementing basic movement. There is a good amount of work in progress on those items, but much of it is behind the scenes, which means I don't have good visuals for them. One of the biggest setbacks is that we found a number of places in our conversion code that were mishandled because we didn't fully understand the proper logic needed. The conversion code is critical to all of the immediate areas we are working on, because it is responsible for migrating assets from the old Icarus proprietary format into modern day formats that can be used by Unity. Simply put, the more we looked at objects that were moved across, the more we found they looked very odd. Some were see-through in places they shouldn't be. Some were shiny in places they shouldn't be. Here is a quick primer before I continue. If you're not interested in the technical bits, feel free to jump down to the Texture map examples below. Most games use a model that looks like this: Shader->Material->3D Model. Shaders define inputs and how lighting will get applied. 
Materials define texture maps, colors, and other properties that are passed into the Shader they are linked to. Texture maps can be grayscale with 1 channel (black), RGB with 3 channels (Red, Green, Blue) of information, or RGBA with 4 channels (Red, Green, Blue, Alpha). And finally, all in-game 3D Models (Objects) have a Material assigned to them.

When Fallen Earth was created, they only had fixed-function pipelines to work with. That means their Shaders were hard coded in C++. Nowadays Shaders are much more diverse. They can operate at the geometry level (Vertex Shaders) or at the pixel level (Pixel Shaders), and they are mostly written in high-level shader languages like HLSL.

The original FE Shaders are split up and named for the lighting model they render, such as:
- Normal Tangent (or DotBump) = materials that will have a Normal map
- Gloss = materials that will have reflections
- Alpha Test = materials that will have sections that are see-through

Fallen Earth also used an older set of texture maps for all of its Materials. This system attempted to "cheat" the look of real lighting across an object:
- Diffuse = all color for an object, including lighting and shadowing
- Normal = detailed pixel-by-pixel curvature data that gives low polygon surfaces much more detail
- Specular = a grayscale image that defines the specular reflectivity of the object
- and a couple more

Today we use custom Shader Graphs or the more standardized Physically Based Rendering approach (https://en.wikipedia.org/wiki/Physically_based_rendering). PBR attempts to model how light really works, as opposed to cheating the look and feel. Unity offers two approaches to PBR: Specular or Metallic. Since Fallen Earth already had Specular texture maps, we chose the Specular implementation. 
This uses different texture maps, such as:
- Albedo = similar to Diffuse, with all color for an object but without any shadowing or lighting
- Normal = detailed pixel-by-pixel curvature data that gives low polygon surfaces much more detail
- Occlusion = grayscale texture showing ambient shadows that would occur without specific lighting
- Specular = RGB texture showing both the color that is reflected and the degree to which parts of the object absorb or reflect light
- Smoothness = grayscale texture showing which parts of the object are rough versus glossy

Now back to our work and some examples. We assumed the Fallen Earth Diffuse texture maps would have coloring in the RGB channels and alpha in the Alpha channel, because this is the default setup in PBR. However, this is not the case for a majority of the Fallen Earth materials. In retrospect, this should have been more obvious to us, as very few Materials require alpha or transparency. Most objects in the world of Fallen Earth aren't see-through, so having empty Alpha channels everywhere would have been wasteful. Instead, they used a "flags" field in each Material to let the system know what the Alpha channel data in the Diffuse texture map would be used for. We found that in nearly 80% of cases, the Specular grayscale data was embedded in the Alpha of the Diffuse texture map. So more recently we had to go back through all the imported texture maps and split out the Alpha channel into a separate image. This allows us to merge that channel into other texture maps and migrate everything properly, depending on what that alpha channel actually contains.

I'm going to walk through samples of texture maps for the Heavy Assault Battle Suit (this is what it's called in the data - but not necessarily what it is called in-game). It contains Phoenix Plate, pants, and boots. For this part, I think it's handy to see the final product of what we imported into Unity to give you some reference on what you're looking at. 
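To make the channel-splitting step above concrete, here is a minimal pure-Python sketch. This is not Little Orbit's actual tooling (a real pipeline would use an image library), and the `ALPHA_IS_SPECULAR` flag name is a hypothetical stand-in for whatever the Material "flags" field actually contains:

```python
# Illustrative sketch only: split the Alpha channel of an RGBA "diffuse"
# texture into a separate grayscale map, as described in the post.
# Pixels are modeled as (R, G, B, A) tuples for simplicity.

ALPHA_IS_SPECULAR = 0x01  # hypothetical material flag


def split_diffuse(pixels, flags):
    """Return (rgb_pixels, alpha_map). When the material's flags say the
    alpha channel carries specular data, the alpha map is reused as the
    grayscale specular texture instead of transparency."""
    rgb = [(r, g, b) for (r, g, b, a) in pixels]
    alpha = [a for (_, _, _, a) in pixels]
    if flags & ALPHA_IS_SPECULAR:
        return rgb, alpha  # alpha channel doubles as grayscale specular
    return rgb, None       # alpha is real transparency (or unused)


# Example: a 2-pixel texture where alpha actually carries shininess
diffuse = [(200, 30, 30, 255), (90, 90, 90, 10)]
rgb, specular = split_diffuse(diffuse, ALPHA_IS_SPECULAR)
```

The key point is that the split is driven by per-material flags, not by the texture itself — the same RGBA data means different things depending on the material that references it.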
Look at the red heart painted on the left side of the chest. It will help you figure out what you're looking at later on. In our previous attempts, all of the shiny areas on this model were transparent. I'm sure you can imagine how weird that looked.

Next, here is a shot to give you an example of what this suit looks like without any texture maps. Notice the lack of detail and how flat everything is.

Here are the texture maps.

Diffuse (with the Alpha removed): Notice the red painted heart on the right side. That part of the texture appears to be mapped backwards to the chest, so the heart shows up on the left side and not the right.

Diffuse Alpha, which turned out to be the Specular texture map: White areas have the most shine, while black areas are the most dull.

Normal texture map: This looks a bit bizarre to most people, and it should. The Normal map isn't painted like the other maps. It's generated from high polygon versions of the same model. Someone very smart decided lower polygon objects could look much better if we mapped the curvature as X,Y,Z values from higher polygon objects, on a per-pixel level, to the R,G,B channels of the Normal map. All the detail you see in that top image comes from this texture map.

As part of our conversion process, we alter the Diffuse to remove shadows and lighting (as much as possible) to create the Albedo texture map used by PBR. We got lucky with the Diffuse texture map in this case, because it didn't have a lot of lighting or shadows painted in. And finally, here is an example of what the new Occlusion map looks like.

As we continued work on the Character System, we came across another problem. Fallen Earth has a relatively complex clothing system, and we found that sometimes our new clothing or armor pieces would interpenetrate, letting body parts poke through. This was because we never imported a set of data on the body called "Selection Sets" (we call these Mat Ids today). 
This data splits up the body into 17 different pieces that can be shown or hidden depending on what you're wearing. To fix this, we had to go back, correct the code to import the Selection Set data, then split the meshes up properly, and finally re-import everything.

Here are some examples of how the Selection Set data is used to show or hide bits of the body depending on what you are wearing. The full Male body looks like this. NOTE: no custom skin color has been applied yet, so it's pure white right now. And here is an outfit assembled from random pieces of clothing we imported (don't judge me for my fashion sense). Here is what the body looks like using the Selection Set data if I hide the outfit. You can see how areas covered by clothing are hidden so they work properly.

Lastly, we also went back and captured some data that allows multiple pieces of equipment to be created from a single 3D model. This part of the system uses the definition of equipment items and allows them to show or hide parts of the model to make it more unique. As an example, here is the same Heavy Assault Battle Suit, but with the pants, boots and gloves hidden to create a Jacket instead.

That's it for now. We're going to keep working through all the equipment, heads, hair, weapons and accessories so that we can reconstitute a character properly, and then we'll start work on basic character movement. Thanks, Matt
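The Selection Set idea described in this post can be sketched very simply: each equipped item declares which body pieces it covers, and covered pieces are hidden so they can't poke through clothing. The piece names below are hypothetical placeholders, not Fallen Earth's actual data (the real body uses 17 pieces):

```python
# Illustrative sketch of "Selection Sets": hide body pieces covered by
# equipment so clothing and skin never interpenetrate.

BODY_PIECES = {"head", "torso", "upper_arms", "forearms", "hands",
               "thighs", "calves", "feet"}  # hypothetical; real data has 17


def visible_body_pieces(equipped_items):
    """equipped_items is a list of sets, each naming the body pieces
    one item covers. Returns the pieces that should still be drawn."""
    covered = set()
    for item in equipped_items:
        covered |= item
    return BODY_PIECES - covered


# Example: a jacket and boots hide the torso, arms, and feet
jacket = {"torso", "upper_arms", "forearms"}
boots = {"feet"}
shown = visible_body_pieces([jacket, boots])
```

Because visibility is computed from the union of everything worn, removing one item automatically reveals exactly the pieces that nothing else still covers.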
  21. Hi everyone, Running a day behind on this. In the last update I laid out the three major areas, and we continued to work on those this week. Unfortunately, our lead on the Login/World Server issue was on vacation. However, the other team members continued that work and may have come up with a fix. I'll need the lead to go back through the work and confirm whether the analysis is correct.

Debugging work continued. The library we were working on now compiles and runs, but we're still working through errors as they come up.

On the multi-threading front, we are still working through Occlusion. Unfortunately, that's a bit of a rabbit hole due to some memory that needs to be accessible across threads. Thanks, Matt
  22. Hi all, Missed posting this last night. We had another strong week, and I feel like our new agents are working out well. As of today we have:
- 207 new tickets (continuing to drop from last week)
- 421 total tickets (also continuing to drop)
We are 1 ticket away from responding to tickets submitted on 6/29. This means that as of yesterday, we are responding in 18 days. Thanks, Matt
  23. Hi everyone, This week was a bit frustrating again. All of the issues currently blocking the Open Beta are caused by bugs in code or systems that were originally created ages ago. Worst of all, the areas we are working on are... boring. That means no sexy graphs or cool pictures to highlight. Instead, I'm going to run down the list of issues we are working on and try my best to explain them:

Login/World server issues handling a lot of players - We spent last week excavating some stress test tools from 2012. We got them working in the middle of this week, which enabled us to start testing why the vast majority of players weren't able to log in or stay connected to the world server during the Beta. We have already verified that the hardware and installations between Live and the Beta environment are exactly the same; moving to new hardware or reinstalling made no difference to the connection issue. In APB 1.20 (Live), our Login and World servers handle spikes just fine when a bunch of players try connecting at the same time. APB 2.0 (Live Consoles) also appears to handle load fine, although concurrency on consoles is a lot lower than on PC, so a less severe version of this issue may exist there. But there is definitely a problem with APB 2.1: any time more than about 50 players try to connect, sessions start to hang. Our best guess is that somewhere in the last 1.5 years we carried over a piece of code from 1.20 that appeared to work during our small SPCT/closed beta tests but fails at scale. The team is continuing to review the code step by step to fix the issue.

Problems debugging crashes submitted by Open Beta players - This appeared to work fine with our small testing group. In the past, we have noticed small inconsistencies from time to time, but we were still able to get enough info to fix issues. However, crash reports submitted during the Open Beta showed lots of inaccurate call stacks and debug info. 
This has made tracking down bugs much more difficult than it should have been. I can't really get into details on why this system is so complex. It just is, and we're stuck with it. So we have started rewriting code from 2015 that originally took several developers 4 months to complete. Similar to the first issue, this task is "open ended", which means it's difficult to put a time estimate on, since we have to work through each line of code to figure out what is failing. With all of that in mind, we have been making good progress, and it doesn't appear it will take too much longer to fix. Hoping for a better update next week.

Crashing fixes / Multi-threading fixes - This is also ongoing. We feel like we've fixed the majority of crashes from the Open Beta, but some pesky DirectX 11 crashes remain. To address those, we are going to kill two birds with one stone. The team is moving ahead with some multi-threading work that we had put off in order to get the Open Beta out. We believe these remaining crashes are caused by a lower level issue in Unreal 3's task handler related to shared memory across threads. The base code for Unreal 3 is nearly 20 years old now, and even the newer sections are around 8 years old. Our solution is to use a newer, more powerful library that has become the industry standard for this type of work. We started implementing that library about 3 months ago. It is very stable, with more sophisticated ways to schedule tasks, and it can ramp up processing on more powerful machines while being much more efficient on lower end machines. So in theory we can fix the remaining crashes while also improving performance. However, we can't just "plug it in" for an easy win. The library works substantially differently from Unreal's code, so each area where multiple threads are used has to be individually migrated to the new library. The team is moving quickly, and we've already converted some of the bigger systems like Shadows and Occlusion. 
As soon as we have the rest converted, we can start testing this again internally. Thanks, Matt
  24. Hi all, This week was much better. We are still training a couple of new customer service agents, but they are all picking up speed. As of today we have:
- 285 new tickets (a lot lower than last week)
- 498 total tickets (a lot lower than last week)
We are responding to tickets submitted on 6/17. Thanks, Matt
  25. Hi everyone, I missed the update yesterday due to the holiday. Last week was a maintenance week. That meant we backtracked over existing code and systems to diagnose the issues from the Beta. Part of the team resurrected a series of old stress tester applications. We've almost got them working for the Login and World servers, which should let us narrow down the server problems. We also fixed a number of the big crashes. There will be more fixes this week as we dig into a section of troublesome multithreaded code. Hope everyone got a nice break over the weekend. Thanks, Matt