During the week I began looking at how I am approaching the research for my thesis. It occurs to me that I am doing two types of research:
- Story
- Tech
Story:
On the Story side this week I have gotten a fair number of responses from the questionnaire, the results of which are interesting.
I am not quite sure how to post this information yet, since I would like it to remain anonymous. I may go through and select small portions as snippets of what I find useful.
A lot of the comments from the questionnaire, along with what I am finding in the journal articles, are consistent with my experience as a military dependent.
One useful way to look at the narrative, found in some of the literature I have been reviewing, breaks it into the stages of a deployment:
- Pre-Deployment – From first notification of deployment until deployment occurs
- During Deployment – From departure until demobilization
- Demobilization – From the unit’s arrival at the demobilization station to departure for its home station
- Reintegration – From arrival at home station to 180 days after arrival
Could this be a useful way of looking at the narrative?
It seems that a few questions arise about the visual storytelling:
- What imagery to use for the video?
- Should it be 360 video?
- Should it be digital content that is representative of the ideas I am trying to get across?
- Should it be still imagery?
Tech:
On the tech side this week I scheduled an appointment with a gentleman, Vj Dr. Mojo, the owner of A360 studio, a space in Brooklyn that is set up for what I am trying to do with my thesis: immersive environments. The space is a little smaller than I would prefer, but it is already projection mapped and has 5.1 audio built in, so that would save me a fair amount of production time.
Mojo showed me several different looks in the space with the walls projection mapped.
I need to send him an email this week with a treatment for the show, which is six weeks away (kind of freaking out).
March 21 is the week of the performance.
More on the tech side, I foresee some problems to solve with the content output as well. The way the space is laid out, it reads as either a cube or a sphere depending on what type of content is generated, so the question is which software I can use to make the content interactive while still hitting the resolution the projectors need for output.
I am currently looking at Unreal Engine and/or Max/Jitter for this.
I am also looking at using the HTC Vive for the motion capture, since my project requires a small space and the Vive is extremely portable, unlike the Vicon or OptiTrack systems.
I have been put in contact with someone who may be able to help in this regard, and I am going to schedule an appointment, if possible, to talk this portion out. I am also meeting with Todd Bryant tomorrow to discuss some of these issues.
I essentially just need the x and y coordinates of a couple of markers in space to drive the interactivity of the project.
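To make that concrete, here is a minimal sketch of how the hand-off could work: whatever system ends up tracking the markers (Vive trackers or something else), the x/y pairs could be streamed to the content engine (Max/Jitter or Unreal) as OSC messages. This is only a sketch under assumptions of my own: it uses the python-osc package, a made-up read_marker_xy() stub standing in for the real tracking code, and an arbitrarily chosen port 7400.

```python
# Sketch: stream marker x/y coordinates to the content engine over OSC.
# Assumptions: python-osc is installed (pip install python-osc), the content
# engine listens for OSC on port 7400, and read_marker_xy() is a placeholder
# for whatever tracking system is actually used (e.g. a Vive tracker).
import time
from pythonosc.udp_client import SimpleUDPClient

OSC_HOST = "127.0.0.1"   # machine running Max/Jitter or Unreal
OSC_PORT = 7400          # arbitrary; must match the receiver


def read_marker_xy(marker_id):
    """Placeholder: return the (x, y) position of a marker in meters.
    In practice this would poll the real tracking system."""
    return 0.0, 0.0


def main():
    client = SimpleUDPClient(OSC_HOST, OSC_PORT)
    marker_ids = [1, 2]  # "a couple of markers in space"
    while True:
        for m in marker_ids:
            x, y = read_marker_xy(m)
            # One message per marker, e.g. /marker/1/xy 0.42 1.87
            client.send_message(f"/marker/{m}/xy", [x, y])
        time.sleep(1 / 60)  # roughly 60 updates per second


if __name__ == "__main__":
    main()
```

On the Max side, a [udpreceive 7400] object would pick these messages up; in Unreal an OSC plugin could do the same, with the addresses and port above being my own placeholders.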