That function has a lot of dependencies, so I had to work through everything it needs.
Finally I figured out what it takes to fix the call to MX_LWIP_Init();
a two-line change gave the output we had been waiting for.
Tuesday, 29 November 2016
Now both functionalities are combined into the source code and it runs, but we got an error that I think is a big one, because all of the output comes out as garbage values.
Some function in the code responsible for the lwIP TCP/IP stack is putting garbage into the output, so I had to find out what it is about.
Monday, 28 November 2016
Today we integrated mbed TLS into the code for authentication, using a private key and a client certificate to connect with the server.
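The key/certificate loading looks roughly like this with the mbed TLS 2.x-era API. This is only a sketch: the PEM buffers `client_cert_pem` and `client_key_pem` are hypothetical names (in our build they would come from flash), and error handling is reduced to early returns.

```c
/* Sketch: load the client's private key and certificate and hand them
 * to the TLS configuration. Assumes mbed TLS 2.x; the PEM buffers are
 * hypothetical placeholders, NUL-terminated as PEM parsing requires. */
#include "mbedtls/ssl.h"
#include "mbedtls/x509_crt.h"
#include "mbedtls/pk.h"
#include <string.h>

extern const char client_cert_pem[];  /* assumed PEM blob */
extern const char client_key_pem[];   /* assumed PEM blob */

int load_client_credentials(mbedtls_ssl_config *conf,
                            mbedtls_x509_crt *crt, mbedtls_pk_context *pk)
{
    int ret;
    mbedtls_x509_crt_init(crt);
    mbedtls_pk_init(pk);

    /* For PEM input the length must include the terminating '\0'. */
    ret = mbedtls_x509_crt_parse(crt,
            (const unsigned char *)client_cert_pem,
            strlen(client_cert_pem) + 1);
    if (ret != 0)
        return ret;

    ret = mbedtls_pk_parse_key(pk,
            (const unsigned char *)client_key_pem,
            strlen(client_key_pem) + 1, NULL, 0);
    if (ret != 0)
        return ret;

    return mbedtls_ssl_conf_own_cert(conf, crt, pk);
}
```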
Friday, 25 November 2016
In the code we had to integrate a TCP/IP stack for the connection of Alexa with nghttp, so we integrated the lwIP TCP/IP stack into our project for the STM32 board.
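With lwIP on a board without an OS, opening the TCP connection uses the raw (callback) API. A minimal sketch, assuming an IPv4-only lwIP build; the server address and port are placeholders, not the real endpoint:

```c
/* Sketch: open a TCP connection with lwIP's raw API (no RTOS).
 * The address "192.168.1.10" and port 443 are placeholders. */
#include "lwip/tcp.h"
#include "lwip/ip_addr.h"

static err_t on_connected(void *arg, struct tcp_pcb *pcb, err_t err)
{
    (void)arg; (void)pcb;
    if (err == ERR_OK) {
        /* Connection established -- the TLS handshake would start here. */
    }
    return err;
}

err_t open_connection(void)
{
    ip_addr_t server;
    struct tcp_pcb *pcb = tcp_new();
    if (pcb == NULL)
        return ERR_MEM;
    ipaddr_aton("192.168.1.10", &server);   /* placeholder address */
    /* tcp_connect returns immediately; on_connected fires when the
     * three-way handshake completes. */
    return tcp_connect(pcb, &server, 443, on_connected);
}
```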
Wednesday, 23 November 2016
Today Sir gave me a new project to work on. It had some different functionalities, of which I didn't understand a word, but I did the task Sir gave me.
Monday, 21 November 2016
The issues were solved today and the project finally started running on the board.
Binaries were created and flashed onto the board, and I finally got output from it.
Thursday, 17 November 2016
A basic project for the particular board was created through STM32CubeMX, and my module was integrated into the Middlewares. The integration was to be done by linking, not by copying the files in, so it had its own issues and errors.
Tuesday, 15 November 2016
Tera Term and System Workbench were installed today, and because of permission issues it took all day.
Monday, 14 November 2016
Today there was a holiday at the company, so I did some of my report work.
Friday, 11 November 2016
The combination work is now complete, and Sir asked me to implement it on the real device, i.e. through an IDE like System Workbench, on the STM32F767 board. I had all the tools installed on my PC, but Sir wanted it on another PC, so all the tools had to be installed on that machine.
So the whole cycle starts again.
Thursday, 10 November 2016
All the interfaces are done. Now I had to combine them into one file, as they were all being executed separately; everything had to run from one main file and one main function. So I put them in one switch statement and made a case for each.
#include <stdio.h>

int main() {
    /* local variable definition */
    char grade = 'B';

    switch (grade) {
    case 'A':
        printf("Excellent!\n");
        break;
    case 'B':
    case 'C':
        printf("Well done\n");
        break;
    case 'D':
        printf("You passed\n");
        break;
    case 'F':
        printf("Better try again\n");
        break;
    default:
        printf("Invalid grade\n");
    }
    printf("Your grade is %c\n", grade);
    return 0;
}
Tuesday, 8 November 2016
The System interface is exposed to multiple modules of the client, such as the synchronize-state event, the user-inactivity report, etc. Basically, it's the whole system we are talking about.
So I finished this module today, and with it all the modules; tomorrow I have to combine them in some form.
Monday, 7 November 2016
The Speaker interface exposes directives and events that are used to adjust volume and mute/unmute a client’s speaker. Alexa supports two methods for volume adjustment, which are exposed through the SetVolume and AdjustVolume directives.
Friday, 4 November 2016
The playback controller offers many playback features, including various events and directives. Coding started today in Visual Studio 2012.
It got completed today itself, as it had fewer modules.
Wednesday, 2 November 2016
The following diagram illustrates state changes driven by AudioPlayer components. Boxes represent AudioPlayer states and the connectors indicate state transitions.
AudioPlayer has the following states:
IDLE: AudioPlayer is only in an idle state when a product is initially powered on or rebooted and prior to acting on a Play directive.
PLAYING: When your client initiates playback of an audio stream, AudioPlayer should transition from an idle state to playing.
If you receive a directive instructing your client to perform an action, such as pausing or stopping the audio stream, if the client has trouble buffering the stream, or if playback fails, AudioPlayer should transition to the appropriate state when the action is performed (and send an event to AVS). Otherwise, AudioPlayer should remain in the playing state until the current stream has finished.
Additionally, AudioPlayer should remain in the playing state when:
Reporting playback progress to AVS
Sending stream metadata to AVS
STOPPED: There are four instances when AudioPlayer should transition to the stopped state. While in the playing state, AudioPlayer should transition to stopped when:
An issue with the stream is encountered and playback fails
A ClearQueue directive with a clearBehavior of CLEAR_ALL is received
A Play directive with a playBehavior of REPLACE_ALL is received
While in the paused or buffer_underrun states, AudioPlayer should transition to stopped when a ClearQueue directive to CLEAR_ALL is received.
AudioPlayer should transition from stopped to playing whenever your client receives a Play directive, starts playing an audio stream, and sends a PlaybackStarted event to AVS.
PAUSED: AudioPlayer should transition to the paused state when audio on the Content channel is paused to accommodate a higher priority input/output (such as user or Alexa speech). Playback should resume when the prioritized activity completes. For more information on prioritizing audio input/outputs, see Interaction Model.
BUFFER_UNDERRUN: AudioPlayer should transition to the buffer_underrun state when the client is being fed data slower than it is being read. AudioPlayer should remain in this state until the buffer is full enough to resume playback, at which point it should return to the playing state.
FINISHED: When a stream is finished playing, AudioPlayer should transition to the finished state. This is true for every stream in your playback queue. Even if there are streams queued to play, your client is required to send a PlaybackFinished event to AVS, and subsequently, transition from the playing state to finished when each stream is finished playing.
AudioPlayer should transition from finished to playing when: